Roni Kobrosly
I spent nearly a decade employing causal modeling and inference in academia as an epidemiologist, and since 2015 I've been applying these approaches as an industry data scientist / ML engineer. I am also a member of the open-source community, as the author and maintainer of the causal-curve
Python package (https://github.com/ronikobrosly/causal-curve). I am currently a Director of Data Science at Capital One.

Sessions
It's common for machine learning practitioners to train a supervised learning model, generate feature importance metrics, and then attempt to use these values to tell a data story that suggests what interventions should be taken to drive the outcome variable in a favorable direction (e.g. "X was an important feature in our churn prediction model, so we should consider doing more X to reduce churn"). This simply does not work, and the idea that standard feature importance measures can be interpreted causally is one of data science's more enduring myths. In this session we'll talk through why this isn't the case, what feature importance is actually good for, and we'll give a brief overview of a simple causal feature importance approach: Meta Learners. This talk should be relevant to machine learning practitioners of any skill level who want to gain actionable, causal insights from their predictive models.
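
To make the contrast concrete, below is a minimal sketch of a T-learner, one of the simplest meta learners of the kind the session refers to. It assumes scikit-learn and uses a small synthetic dataset; the variable names, data-generating process, and model choice are illustrative assumptions, not material from the talk itself.

```python
# Minimal T-learner sketch (illustrative only): fit one outcome model per
# treatment arm, then estimate the treatment effect as the difference in
# predictions. Synthetic data and variable names are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data: X are observed features/confounders, t is a binary
# treatment (e.g. "did we do more X?"), y is the outcome (e.g. churn score).
n = 5000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))              # treatment depends on X
y = 2.0 * t * (X[:, 1] > 0) + X[:, 0] + rng.normal(size=n)   # heterogeneous effect

# Fit separate outcome models for the treated and control groups.
model_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
model_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

# Estimated conditional average treatment effect (CATE): the difference in
# predicted outcomes under treatment vs. control for each observation.
cate = model_treated.predict(X) - model_control.predict(X)
print(f"Estimated average treatment effect: {cate.mean():.2f}")
```

Unlike a feature importance score, the quantity estimated here is an explicit counterfactual contrast, which is what an intervention-oriented data story actually needs.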