Inside the Custom Framework for Managing Airflow Code at Wix with Gil Reich
Efficient orchestration and maintainability are crucial for data engineering at scale. Gil Reich, Data Developer for Data Science at Wix, shares how his team reduced code duplication, standardized pipelines, and improved Airflow task orchestration using a Python-based framework built within the data science team.

In this episode, Gil explains how this internal framework simplifies DAG creation, improves documentation accuracy, and enables consistent task generation for machine learning pipelines. He also shares lessons from complex DAG optimization and maintaining testable code.

Key Takeaways:
(03:23) Code duplication creates long-term problems.
(08:16) Frameworks bring order to complex pipelines.
(09:41) Shared functions cut down repetitive code.
(17:18) Auto-generated docs stay accurate by design.
(22:40) On-demand DAGs support real-time workflows.
(25:08) Task-level sensors improve run efficiency.
(27:40) Combine local runs with automated tests.
(30:09) Clean code helps teams scale faster.

Resources Mentioned:
Gil Reich
https://www.linkedin.com/in/gilreich/
Wix | LinkedIn
https://www.linkedin.com/company/wix-com/
Wix | Website
https://www.wix.com/
DS DAG Framework
https://airflowsummit.org/slides/2024/92-refactoring-dags.pdf
Apache Airflow
https://airflow.apache.org/
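The framework's code isn't included in the show notes, but as a minimal sketch of the "shared functions cut down repetitive code" idea, here is a hypothetical Airflow 2.x helper. All names are illustrative, not Wix's actual API: one function centralizes task construction so individual pipelines stop copy-pasting operator boilerplate.

```python
# Hypothetical sketch: a shared helper that builds standardized tasks,
# so naming, retries, and callbacks live in one place. Names are
# illustrative, not Wix's framework.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(model_name: str) -> None:
    print(f"training {model_name}")


def make_training_task(dag: DAG, model_name: str) -> PythonOperator:
    # Every pipeline gets the same task conventions from this one function.
    return PythonOperator(
        task_id=f"train_{model_name}",
        python_callable=train_model,
        op_kwargs={"model_name": model_name},
        retries=2,
        dag=dag,
    )


with DAG("ml_training", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    for model in ["churn", "upsell", "fraud"]:
        make_training_task(dag, model)
```

Changing a convention (say, default retries) then means editing one helper rather than every DAG file.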
--------
31:02
Modernizing Legacy Data Systems With Airflow at Procter & Gamble with Adonis Castillo Cordero
Legacy architecture and AI workloads pose unique challenges at scale, especially in a global enterprise with complex data systems. In this episode, we explore strategies to proactively monitor and optimize pipelines while minimizing downstream failures.

Adonis Castillo Cordero, Senior Automation Manager at Procter & Gamble, joins us to share actionable best practices for dependency mapping, anomaly detection and architecture simplification using Apache Airflow.

Key Takeaways:
(03:13) Integrating legacy data systems into modern architecture.
(05:51) Designing workflows for real-time data processing.
(07:57) Mapping dependencies early to avoid pipeline failures.
(09:02) Building automated monitoring into orchestration frameworks.
(12:09) Detecting anomalies to prevent performance bottlenecks.
(15:24) Monitoring data quality to catch silent failures.
(17:02) Prioritizing responses based on impact severity.
(18:55) Simplifying dashboards to highlight critical metrics.

Resources Mentioned:
Adonis Castillo Cordero
https://www.linkedin.com/in/adoniscc/
Procter & Gamble | LinkedIn
https://www.linkedin.com/company/procter-and-gamble/
Procter & Gamble | Website
http://www.pg.com
Apache Airflow
https://airflow.apache.org/
OpenLineage
https://openlineage.io/
Azure Monitor
https://azure.microsoft.com/en-us/products/monitor/
AWS Lookout for Metrics
https://aws.amazon.com/lookout-for-metrics/
Monte Carlo
https://www.montecarlodata.com/
Great Expectations
https://greatexpectations.io/
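As a generic illustration of "building automated monitoring into orchestration frameworks" (a sketch of the pattern, not Procter & Gamble's actual setup), the snippet below attaches one failure callback to every task through default_args; the alert destination is a stand-in for a real sink such as Azure Monitor.

```python
# Generic sketch: monitoring wired into the orchestration layer itself,
# so every task reports failures the same way. Not P&G's implementation.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def alert_on_failure(context):
    # Stand-in: in practice this would push to Azure Monitor, Slack, etc.
    ti = context["task_instance"]
    print(f"ALERT: {ti.dag_id}.{ti.task_id} failed at {context['ts']}")


default_args = {
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": alert_on_failure,  # applied to every task below
}

with DAG(
    "monitored_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    default_args=default_args,
) as dag:
    PythonOperator(task_id="extract", python_callable=lambda: print("extract"))
```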
--------
22:13
Building an End-to-End Data Observability System at Netflix with Joseph Machado
Building reliable data pipelines starts with maintaining strong data quality standards and creating efficient systems for auditing, publishing and monitoring. In this episode, we explore the real-world patterns and best practices for ensuring data pipelines stay accurate, scalable and trustworthy.

Joseph Machado, Senior Data Engineer at Netflix, joins us to share practical insights gleaned from supporting Netflix's Ads business as well as over a decade of experience in the data engineering space. He discusses implementing audit-publish patterns, building observability dashboards, defining in-band and separate data quality checks, and optimizing data validation across large-scale systems.

Key Takeaways:
(03:14) Supporting data privacy and engineering efficiency within data systems.
(10:41) Validating outputs with reconciliation checks to catch transformation issues.
(16:06) Applying standardized patterns for auditing, validating and publishing data.
(19:28) Capturing historical check results to monitor system health and improvements.
(21:29) Treating data quality and availability as separate monitoring concerns.
(26:26) Using containerization strategies to streamline pipeline executions.
(29:47) Leveraging orchestration platforms for better visibility and retry capability.
(31:59) Managing business pressure without sacrificing data quality practices.
(35:46) Starting simple with quality checks and evolving toward more complex frameworks.

Resources Mentioned:
Joseph Machado
https://www.linkedin.com/in/josephmachado1991/
Netflix | LinkedIn
https://www.linkedin.com/company/netflix/
Netflix | Website
https://www.netflix.com/browse
Start Data Engineering
https://www.startdataengineering.com/
Apache Airflow
https://airflow.apache.org/
dbt Labs
https://www.getdbt.com/
Great Expectations
https://greatexpectations.io/
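The audit-publish (often "write-audit-publish") pattern discussed here is well established. The following minimal sketch, with placeholder table names and a placeholder row-count check rather than Netflix's implementation, stages data, audits it, and publishes only when the checks pass:

```python
# Generic write-audit-publish sketch: data lands in staging, checks run
# against it, and only audited data is promoted to production.
# Table names and the check logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def write_staging():
    print("load data into analytics.orders_staging")


def audit_staging():
    row_count = 100  # stand-in for a real query against the staging table
    if row_count == 0:
        raise ValueError("audit failed: staging table is empty")


def publish():
    print("swap analytics.orders_staging into analytics.orders")


with DAG(
    "write_audit_publish",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
) as dag:
    write = PythonOperator(task_id="write", python_callable=write_staging)
    audit = PythonOperator(task_id="audit", python_callable=audit_staging)
    pub = PythonOperator(task_id="publish", python_callable=publish)
    write >> audit >> pub  # publish runs only if the audit task succeeds
```

Because the audit task raises on failure, bad data never reaches the production table, and the failed run is visible and retryable in the orchestrator.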
--------
38:54
Why Developer Experience Shapes Data Pipeline Standards at Next Insurance with Snir Israeli
Creating consistency across data pipelines is critical for scaling engineering teams and ensuring long-term maintainability.

In this episode, Snir Israeli, Senior Data Engineer at Next Insurance, shares how enforcing coding standards and investing in developer experience transformed their approach to data engineering. He explains how implementing automated code checks, clear documentation practices and a scoring system helped drive alignment across teams, improve collaboration and reduce technical debt in a fast-growing data environment.

Key Takeaways:
(02:59) Inconsistencies in code style create challenges for collaboration and maintenance.
(04:22) Programmatically enforcing rules helps teams scale their best practices.
(08:55) Performance improvements in data pipelines lead to infrastructure cost savings.
(13:22) Developer experience is essential for driving adoption of internal tools.
(19:44) Dashboards can operationalize standards enforcement and track progress over time.
(22:49) Standardization accelerates onboarding and reduces friction in code reviews.
(25:39) Linting rules require ongoing maintenance as tools and platforms evolve.
(27:47) Starting small and involving the team leads to better adoption and long-term success.

Resources Mentioned:
Snir Israeli
https://www.linkedin.com/in/snir-israeli/
Next Insurance | LinkedIn
https://www.linkedin.com/company/nextinsurance/
Next Insurance | Website
https://www.nextinsurance.com/
Apache Airflow
https://airflow.apache.org/
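One common way to enforce standards programmatically, in the spirit of the takeaways above, is a CI test that loads every DAG and fails when a rule is broken. The specific rules below (a real owner is set, retries are configured) are illustrative assumptions, not Next Insurance's actual checks:

```python
# Illustrative sketch of lint-style DAG checks run in CI.
# The rules here are example assumptions, not Next Insurance's rules.
from airflow.models import DagBag


def test_dag_standards():
    dag_bag = DagBag(include_examples=False)
    # Any file that fails to import breaks the whole build.
    assert not dag_bag.import_errors, f"broken DAG files: {dag_bag.import_errors}"
    for dag_id, dag in dag_bag.dags.items():
        for task in dag.tasks:
            # "airflow" is the framework's default owner, so flag it.
            assert task.owner != "airflow", f"{dag_id}.{task.task_id}: set a real owner"
            assert task.retries >= 1, f"{dag_id}.{task.task_id}: configure retries"
```

Run under pytest, a check like this turns style agreements into a hard gate instead of a code-review argument.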
--------
30:28
Data Quality and Observability at Tekmetric with Ipsa Trivedi
Airflow's adaptability is driving Tekmetric's ability to unify complex data workflows, deliver accurate insights and support both internal operations and customer-facing services, all within a rapidly growing startup environment.

In this episode, Ipsa Trivedi, Lead Data Engineer at Tekmetric, shares how her team is standardizing pipelines while supporting unique customer needs. She explains how Airflow enables end-to-end data services, simplifies orchestration across varied sources and supports scalable customization. Ipsa also highlights early wins with Airflow, its intuitive UI and the team's roadmap toward data quality, observability and a future self-serve data platform.

Key Takeaways:
(02:26) Powering auto shops nationwide with a unified platform.
(05:17) A new data team was formed to centralize and scale insights.
(07:23) Flexible, open source and made to fit: Airflow wins.
(10:42) Pipelines handle anything from email to AWS.
(12:15) Custom DAGs fit every team's unique needs.
(17:01) Data quality checks are built into the plan.
(18:17) Self-serve data mesh is the end goal.
(19:59) Airflow now fits so well, there's nothing left on the wishlist.

Resources Mentioned:
Ipsa Trivedi
https://www.linkedin.com/in/ipsatrivedi/
Tekmetric | LinkedIn
https://www.linkedin.com/company/tekmetric/
Tekmetric | Website
https://www.tekmetric.com/
Apache Airflow
https://airflow.apache.org/
AWS RDS
https://aws.amazon.com/free/database/
Astro by Astronomer
https://www.astronomer.io/product/
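As a hedged sketch of "custom DAGs fit every team's unique needs", the loop below generates one DAG per team from a single template, a standard Airflow pattern for config-driven pipelines. The team configs are invented for illustration:

```python
# Sketch of config-driven DAG generation: one template, many team DAGs.
# The configs are made up; a real setup might load them from YAML.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

TEAM_CONFIGS = [
    {"team": "marketing", "schedule": "@daily"},
    {"team": "finance", "schedule": "@hourly"},
]

for cfg in TEAM_CONFIGS:
    with DAG(
        dag_id=f"{cfg['team']}_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule=cfg["schedule"],
    ) as dag:
        PythonOperator(
            task_id="run_pipeline",
            # Default-arg binding captures each team name at loop time.
            python_callable=lambda team=cfg["team"]: print(f"running {team}"),
        )
    globals()[dag.dag_id] = dag  # expose each generated DAG to the scheduler
```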
About The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI
Welcome to The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI, the podcast where we keep you up to date with insights and ideas propelling the Airflow community forward.
Join us each week as we explore the current state, future and potential of Airflow with leading thinkers in the community, and discover how best to leverage this workflow management system to meet the ever-evolving needs of data engineering and AI ecosystems.
Podcast Webpage: https://www.astronomer.io/podcast/