A cyclist disappears to the model, not to your eyes—and that mismatch is the heart of safety-critical AI. We open with the “vanishing cyclist” to show how tiny, imperceptible perturbations can flip life-or-death decisions, then walk through a practical path to trust that spans data, verification, and deployment. Along the way, we share real stories from BMW, Airbus, and Madrid Metro to ground the engineering in results, not hype.
We break down how to build a resilient pipeline: domain-specific data labeling, realistic synthetic data generation for rare and risky scenarios, and tight interoperability across MATLAB, Python, PyTorch, TensorFlow, and ONNX. We dig into explainability beyond classification, using D-RISE for object detectors and semantic segmentation to reveal what the network actually relies on to decide. Then we raise the bar with formal verification for robustness—mathematical guarantees within defined perturbation sets—so you aren't mistaking the absence of discovered attacks for true safety.
Finally, we get practical about the edge. Model compression techniques such as projection cut parameter counts while recovering accuracy, enabling fast, power-efficient deployment to CPUs, GPUs, and FPGAs, backed by code generation for the entire application. We also cover runtime safeguards like out-of-distribution detection to catch smog-on-the-runway moments and escalate safely. Throughout, we connect the work to evolving standards, the EU AI Act, and updated workflows that adapt the V-model for learning systems, so your process and artifacts are ready for audits and certification.
If you care about trustworthy AI for cars, planes, rail, and medical devices—and want tools and habits that survive contact with reality—this one’s for you. Listen, subscribe, and leave a review with your biggest trust gap or the safeguard you’d ship first.
Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org