
EDGE AI POD


Available Episodes

5 of 67
  • Comparative Analysis of NPU Optimized Software Framework
    The future of AI isn't just in massive cloud servers—it's already sitting in your pocket. In this eye-opening presentation, Yeon-seok, CEO and co-founder of JTIC AI, reveals how his company is revolutionizing the AI landscape by tapping into the underutilized Neural Processing Units (NPUs) that have been standard in smartphones since 2017.

    While tech giants pour billions into cloud infrastructure, JTIC AI has identified a critical opportunity: leveraging the powerful AI processors already in billions of devices worldwide. This approach delivers not just cost savings but crucial advantages, including offline functionality, enhanced data security, and real-time responsiveness—without depending on internet connectivity.

    The technical journey involves three essential components: hardware utilization, model optimization, and runtime software. Yeon-seok breaks down model-optimization techniques such as pruning, quantization, and knowledge distillation that make complex AI models deployable on mobile devices. However, the biggest challenge isn't hardware capability but software fragmentation. Unlike the GPU market, dominated by NVIDIA and CUDA, mobile devices operate in a fragmented ecosystem where Apple, Qualcomm, MediaTek, and others maintain incompatible software stacks, creating significant barriers for AI engineers.

    JTIC AI's solution is an end-to-end automated pipeline that handles everything from model optimization to device-specific benchmarking. Their system can determine which runtime will deliver optimal performance for a specific model on a specific device—something that's impossible to predict without comprehensive testing. With this approach, developers can deploy sophisticated AI across the mobile ecosystem without wrestling with manufacturer-specific implementations.

    Ready to unlock the AI capabilities already sitting in your users' pockets? Discover how on-device AI can transform your applications with better privacy, offline functionality, and faster response times—all while reducing your cloud infrastructure costs.

    Send us a text | Support the show | Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org
    --------  
    13:32
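As a rough illustration of the quantization technique mentioned in this episode, here is a minimal symmetric post-training quantizer in plain Python. The function names, the example weights, and the 8-bit setting are illustrative assumptions, not details from the talk:

```python
def quantize_uniform(weights, num_bits=8):
    # Symmetric per-tensor quantization: one float scale, signed integers.
    qmax = 2 ** (num_bits - 1) - 1              # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the stored integers.
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.031, 0.44]   # hypothetical float32 weights
q, scale = quantize_uniform(weights)
recovered = dequantize(q, scale)
# Worst-case rounding error is half a quantization step.
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, recovered))
```

Storing 8-bit integers plus one scale instead of 32-bit floats cuts weight memory by roughly 4x, which is the kind of saving that makes models fit on mobile NPUs.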
  • Powering Intelligence: Anaflash's Revolutionary AI Microcontroller with Embedded Flash Memory
    Memory bottlenecks, not computational limitations, are the true barrier holding back Edge AI. This revelation lies at the heart of Anaflash's approach to intelligent edge computing: an AI microcontroller with embedded flash memory that transforms how we think about power efficiency and cost in smart devices.

    The team has engineered a solution that addresses the two fundamental challenges facing Edge AI adoption: power efficiency and cost. Their microcontroller features zero-standby-power weight memory built on 4-bit-per-cell embedded flash technology, seamlessly integrated with computation resources. Unlike traditional non-volatile memory options that demand extra processing steps and offer limited storage density, this technology requires no additional masks and scales efficiently.

    At the core of this innovation is the Near-Memory Computing Unit (NMCU), which establishes a tight coupling with flash memory through a wide I/O interface on a single chip. This architecture eliminates the need to fetch data from external memory after booting or waking from deep sleep—a game-changing feature for battery-powered devices. The NMCU's three-part design enhances parallel computation while minimizing CPU intervention: control logic manages weight addresses and buffer flow, 16 processing elements share weights through high-bandwidth connections, and a quantization block efficiently converts computational results.

    Fabricated in Samsung Foundry's 28 nm standard logic process on a compact 4 × 4.5 mm² die, the microcontroller delivers impressive results. Testing with MNIST and Deep AutoEncoder models demonstrates accuracy virtually identical to software baselines: over 95% and 0.878 AUC, respectively. The overstress-free wordline driver circuit extends flash cell margins, further enhancing reliability and performance.

    Ready to transform your Edge AI applications with technology that combines efficiency, performance, and cost-effectiveness? Experience the future of intelligent edge computing with Anaflash's embedded flash microcontroller, where memory and computation unite to power the next generation of smart devices.
    --------  
    15:21
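The density benefit of 4-bit-per-cell storage can be illustrated with a toy packing scheme: two 4-bit weights per byte, doubling capacity over one-weight-per-byte storage. This is a sketch for intuition only, not Anaflash's actual cell encoding:

```python
def pack_nibbles(vals):
    # Pack pairs of 4-bit values (0..15) into single bytes, high nibble first.
    assert len(vals) % 2 == 0 and all(0 <= v < 16 for v in vals)
    return bytes((hi << 4) | lo for hi, lo in zip(vals[::2], vals[1::2]))

def unpack_nibbles(packed):
    # Reverse the packing: each byte yields two 4-bit values.
    out = []
    for b in packed:
        out.extend(((b >> 4) & 0xF, b & 0xF))
    return out

vals = [3, 15, 0, 7]          # hypothetical 4-bit quantized weights
packed = pack_nibbles(vals)   # 4 weights fit in 2 bytes
assert unpack_nibbles(packed) == vals
```

Halving the bytes per weight is exactly why multi-bit-per-cell flash helps keep whole models on-chip, avoiding the external-memory fetches the NMCU is designed to eliminate.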
  • Enhancing Field-Oriented Control of Electric Drives with a Tiny Neural Network
    Ever wondered how the electric vehicles of tomorrow will squeeze every last drop of efficiency from their batteries? The answer lies at the intersection of artificial intelligence and motor control.

    The electrification revolution in automotive technology demands increasingly sophisticated control systems for permanent magnet synchronous motors, the beating heart of electric vehicle propulsion. These systems operate at remarkable speeds, with control loops closing every 50 microseconds (that's 20,000 times per second!), and future systems pushing toward 10 microseconds. Traditional PID controllers, while effective under steady conditions, struggle with rapid transitions, creating energy-wasting overshoots that drain precious battery life.

    Our research presents a neural network approach that drastically reduces these inefficiencies. By generating time-varying compensation factors, the AI solution cuts maximum overshoots by up to 70% in challenging test scenarios. The methodology combines MathWorks' development tools with ST's microcontroller technology in a deployable package requiring just 1,700 parameters—orders of magnitude smaller than typical deep learning models.

    While we've made significant progress, challenges remain. Current deployment achieves 70-microsecond inference times on automotive-grade microcontrollers, still shy of the ultimate 10-microsecond target. Hardware acceleration represents the next frontier, along with exploring higher-level models and improved training methodologies. This research opens exciting possibilities for squeezing maximum efficiency from electric vehicles, turning previously wasted energy into extended range and performance.

    Curious about the technical details? The complete paper is available on arXiv—scan the QR code to dive deeper into the future of smart motor control.
    --------  
    16:59
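The compensation idea described in this episode can be sketched as a standard PID loop whose output is scaled by a time-varying factor. In the paper that factor comes from the trained network; here it is just a function argument, and all class names and gains are hypothetical:

```python
class PIDController:
    """Textbook PID loop; `compensation` stands in for the NN-generated,
    time-varying gain described in the talk (a simplification)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, compensation=1.0):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # A neural network would modulate this output during fast transients
        # to suppress overshoot; a constant 1.0 reduces to plain PID.
        return compensation * (
            self.kp * error + self.ki * self.integral + self.kd * derivative
        )

pid = PIDController(kp=1.0, ki=0.0, kd=0.0, dt=50e-6)  # 50 µs loop; gains illustrative
plain = pid.step(2.0)                     # uncompensated response to a step error
damped = pid.step(2.0, compensation=0.5)  # a smaller gain tames the transient
```

The appeal of this structure is that the safety-critical PID core stays intact; the 1,700-parameter network only nudges its gain, which keeps inference small enough for a microcontroller.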
  • Transforming Human-Computer Interaction with OpenVINO
    The gap between science fiction and reality is closing rapidly. Remember when talking to computers was just a fantasy in movies? Raymond Lo's presentation on building chatbots with OpenVINO reveals how Intel is transforming ordinary PCs into extraordinary AI companions.

    Imagine generating a photorealistic teddy bear image in just eight seconds on your laptop's integrated GPU, or having a natural conversation with a locally running chatbot that doesn't need cloud connectivity. These scenarios aren't futuristic dreams—they're happening right now thanks to breakthroughs in optimizing AI models for consumer hardware.

    The key breakthrough isn't just raw computational power but intelligent optimization. When Raymond's team first attempted to run large language models locally, they didn't face computational bottlenecks—they hit memory walls: models simply wouldn't fit in available RAM. Through compression techniques like quantization, they've reduced memory requirements by 75% while maintaining remarkable accuracy. The Neural Network Compression Framework (NNCF) now lets developers experiment with different compression techniques to find the right balance between size and performance.

    What makes this particularly exciting is the deep integration with Windows and other platforms. Microsoft's AI Foundry now incorporates OpenVINO technology, meaning a new PC comes ready to deliver optimized AI experiences out of the box. This represents a fundamental shift in how we think about computing—from tools we command with keyboards and mice to companions we converse with naturally.

    For developers, OpenVINO offers a treasure trove of resources: hundreds of notebooks with examples ranging from computer vision to generative AI. This dramatically accelerates development cycles, turning what used to take months into weeks. As Raymond revealed, even complex demos can be created in just two weeks using these tools.

    Ready to transform your PC into an AI powerhouse? Explore OpenVINO today and join the revolution in human-computer interaction. Your next conversation partner might be sitting on your desk already.
    --------  
    43:26
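The 75% memory-reduction figure follows directly from storing weights as 8-bit integers instead of 32-bit floats. A quick back-of-the-envelope check (the 7B parameter count is a hypothetical example, not a model from the talk):

```python
def model_memory_gb(num_params, bits_per_weight):
    # Approximate weight-storage footprint; ignores activations and overhead.
    return num_params * bits_per_weight / 8 / 1e9

params = 7e9                           # hypothetical 7B-parameter LLM
fp32 = model_memory_gb(params, 32)     # 28 GB: too big for typical laptop RAM
int8 = model_memory_gb(params, 8)      # 7 GB after 8-bit quantization
print(f"reduction: {1 - int8 / fp32:.0%}")   # prints "reduction: 75%"
```

Going further to 4-bit weights would halve the footprint again, which is the size/accuracy trade-off NNCF lets developers explore.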
  • Energy-Efficient and High-Throughput Inference Using a Compressed Tsetlin Machine
    Logic beats arithmetic in the machine learning revolution happening at Newcastle University. From a forgotten Soviet mathematician's work in the 1960s to modern embedded systems, the Tsetlin Machine represents a paradigm shift in how we approach artificial intelligence.

    Unlike traditional neural networks that rely on complex mathematical operations, the Tsetlin Machine harnesses Boolean logic: simple yes/no questions similar to how humans naturally think. This "white box" approach creates interpretable models using only AND, OR, and NOT gates, without any multiplication operations. The result? Machine learning that's not only understandable but dramatically more efficient.

    The technical magic happens through a process called Booleanization, which converts input data into binary questions that feed learning automata. These finite state machines work in parallel, creating logical patterns that combine to make decisions. What's remarkable is the natural sparsity of the resulting models: for complex tasks like image recognition, more than 99% of potential features are automatically excluded. By further optimizing this sparsity and removing "weak includes," Newcastle's team has achieved striking efficiency improvements.

    The numbers don't lie: 10x faster inference than binarized neural networks, dramatically lower memory footprint, and energy-efficiency improvements around 20x on embedded platforms. Their latest microchip implementation consumes just 8 nanojoules per frame for MNIST character recognition—likely the lowest energy consumption ever published for this benchmark. For edge computing and IoT applications where power constraints are critical, this breakthrough opens new possibilities.

    Beyond efficiency, the Tsetlin Machine addresses the growing demand for explainable AI. As regulations tighten around automated decision-making, the clear logical propositions generated by this approach provide a transparency that black-box neural networks simply can't match.

    Ready to explore this approach? Visit tsetlinmachine.org or search for the unified GitHub repository to get started with open-source implementations.
    --------  
    20:21
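The clause logic described in this episode is simple enough to sketch directly: each clause is an AND over a sparse set of literals, and classification is a signed vote across clauses. The two-clause model below is a made-up toy, not one learned by the Newcastle team:

```python
def eval_clause(literals, x):
    # A clause is a pure AND over its included literals. `literals` maps a
    # feature index to True (require x[i]) or False (require NOT x[i]).
    # Sparsity shows up as most feature indices simply being absent here.
    return all(x[i] if positive else not x[i] for i, positive in literals.items())

def vote(clauses, x):
    # Positive-polarity clauses add a vote, negative-polarity ones subtract.
    return sum(sign for sign, literals in clauses if eval_clause(literals, x))

# Hypothetical two-clause model over three Boolean features.
clauses = [
    (+1, {0: True, 1: False}),   # fires when x0 AND NOT x1
    (-1, {2: False}),            # fires when NOT x2
]
x = [True, False, True]
score = vote(clauses, x)         # only the first clause fires, so score == 1
```

Because inference is nothing but AND/NOT evaluations and a summation, it maps onto plain logic gates with no multipliers, which is where the nanojoule-per-frame energy figures come from.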


About EDGE AI POD

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed from the world's largest edge AI community. It features shows like EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics. Join us to stay informed and inspired!

v7.23.12 | © 2007-2025 radio.de GmbH