This is a one-day tutorial consisting of two half-day sessions.
The morning session is on Machine Learning for Causal Inference; the afternoon session is on Causal Inference and Stable Learning.
Causal inference has numerous real-world applications in many domains, such as health care, marketing, political science, and online advertising. Treatment effect estimation, a fundamental problem in causal inference, has been extensively studied in statistics for decades. However, traditional treatment effect estimation methods may not handle large-scale, high-dimensional, heterogeneous data well. In recent years, an emerging research direction that combines the advantages of traditional treatment effect estimation approaches (e.g., matching estimators) with advanced representation learning approaches (e.g., deep neural networks) has attracted increasing attention in the broad artificial intelligence field. In this tutorial, we will introduce both traditional and state-of-the-art representation learning algorithms for treatment effect estimation. Background on causal inference, counterfactuals, and matching estimators will be covered as well. We will also showcase promising applications of these methods in different domains.
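As a minimal sketch of the matching idea mentioned above (not material from the tutorial itself, and using synthetic data with a known treatment effect): each unit's missing counterfactual outcome is borrowed from its nearest neighbor in the opposite treatment group, and the per-unit differences are averaged.

```python
import numpy as np

def matching_ate(X, t, y):
    """Estimate the average treatment effect by 1-nearest-neighbor
    covariate matching: each unit's counterfactual outcome is taken
    from the closest unit in the opposite treatment group."""
    X, t, y = np.asarray(X, float), np.asarray(t, bool), np.asarray(y, float)
    effects = []
    for i in range(len(y)):
        opp = np.where(t != t[i])[0]                      # opposite group
        j = opp[np.argmin(np.linalg.norm(X[opp] - X[i], axis=1))]
        y1 = y[i] if t[i] else y[j]                       # outcome under treatment
        y0 = y[j] if t[i] else y[i]                       # outcome under control
        effects.append(y1 - y0)
    return float(np.mean(effects))

# Synthetic data where the true treatment effect is +2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
t = rng.random(500) < 0.5
y = X @ np.array([1.0, -0.5, 0.3]) + 2.0 * t + rng.normal(scale=0.1, size=500)
print(matching_ate(X, t, y))  # typically close to 2.0
```

This brute-force version is quadratic in the sample size; the scalability issue it illustrates is exactly why the tutorial turns to subspace and representation learning for large, high-dimensional data.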
1. Welcome from Organizers: 8:30 AM - 8:35 AM
2. Background on Causal Inference: 8:35 AM - 9:00 AM
3. Classical Causal Inference Methods: 9:00 AM - 9:30 AM
4. Subspace Learning for Causal Inference: 9:30 AM - 10:00 AM
5. Coffee Break: 10:00 AM - 10:30 AM
6. Deep Representation Learning for Causal Inference: 10:30 AM - 11:00 AM
7. Applications: 11:00 AM - 11:25 AM
8. Conclusions and Future Perspectives: 11:25 AM - 11:45 AM
9. Closing Remarks: 11:45 AM - 12:00 PM
Predicting future outcome values from observed features, using a model estimated on a training data set, is a common machine learning problem. Many learning algorithms have been proposed and shown to be successful when the test data and training data come from the same distribution. However, the best-performing models for a given training distribution typically exploit subtle statistical relationships among features, making them prone to prediction error on test data whose distribution differs from the training distribution. Developing learning models that are stable and robust to shifts in data is therefore of paramount importance for both academic research and real-world applications.
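The failure mode described above can be reproduced in a few lines (a hedged toy illustration on synthetic data, not an example from the tutorial): a least-squares model happily uses a feature that merely correlates with the unexplained part of the outcome in training, and its error jumps when that spurious correlation disappears at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, corr):
    """y is caused by x1 only; x2 correlates with the unexplained
    part of y with strength `corr` (a spurious shortcut feature)."""
    x1 = rng.normal(size=n)
    e = rng.normal(size=n)                                 # noise driving y
    x2 = corr * e + np.sqrt(1 - corr**2) * rng.normal(size=n)
    y = 2.0 * x1 + e
    return np.column_stack([x1, x2]), y

X_tr, y_tr = make_data(5000, corr=0.9)   # shortcut available in training
X_te, y_te = make_data(5000, corr=0.0)   # shortcut gone at test time

w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # puts weight on x2
mse_tr = np.mean((X_tr @ w - y_tr) ** 2)
mse_te = np.mean((X_te @ w - y_te) ** 2)
print(mse_tr < mse_te)  # True: the shortcut-reliant model degrades under shift
```

A model that used only the causal feature x1 would have the same error in both environments; recognizing and relying on such stable, causal relationships is the theme of this session.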
Causal inference, the process of drawing conclusions about causal connections from the conditions under which an effect occurs, is a powerful statistical modeling tool for explanatory and stable learning. In this tutorial, we focus on causal inference and stable learning, aiming to exploit causal knowledge in observational data to improve the interpretability and stability of machine learning algorithms. We will first give an introduction to causal inference and present recent data-driven approaches for estimating causal effects from observational data, especially in high-dimensional settings. To bridge the gap between causal inference and machine learning, we will then define the stability and robustness of learning algorithms and introduce recent stable learning algorithms that improve the stability and interpretability of prediction. Finally, we will discuss applications and future directions of stable learning, and provide benchmarks for stable learning.
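One classical data-driven approach of the kind referenced above is inverse propensity weighting (IPW). The sketch below (an illustration on synthetic confounded data, with the propensity score assumed known for simplicity; it is not code from the tutorial) shows how reweighting removes the bias that a naive group comparison suffers:

```python
import numpy as np

def ipw_ate(t, y, p):
    """Inverse-propensity-weighted average treatment effect:
    reweight each unit by the inverse probability of the
    treatment it actually received."""
    return float(np.mean(t * y / p - (1 - t) * y / (1 - p)))

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)                        # confounder
p = 1 / (1 + np.exp(-2 * x))                  # treatment probability depends on x
t = (rng.random(n) < p).astype(float)
y = x + 1.0 * t + 0.1 * rng.normal(size=n)    # true treatment effect is +1.0

naive = y[t == 1].mean() - y[t == 0].mean()   # inflated by confounding
print(naive, ipw_ate(t, y, p))                # naive is well above 1.0; IPW is near 1.0
```

In practice the propensity p must itself be estimated, which is difficult in high dimensions; that difficulty motivates the reweighting-based stable learning methods covered in the session.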
1. Causal Inference and Its Implication for Learning:
2. Learning Stability and Robustness:
3. Theory and Algorithms for Stable Learning:
4. Applications and Benchmarks:
First Session: 1:00 PM - 2:00 PM PDT
Break: 2:00 PM - 2:30 PM PDT
Second Session: 2:30 PM - 3:30 PM PDT
Q&A: 3:30 PM - 4:00 PM PDT
Tsinghua University, China
Tsinghua University, China