TY - JOUR
T1 - Evaluation Methods and Measures for Causal Learning Algorithms
AU - Cheng, Lu
AU - Guo, Ruocheng
AU - Moraffah, Raha
AU - Sheth, Paras
AU - Candan, K. Selcuk
AU - Liu, Huan
N1 - Funding Information: This work was supported in part by the National Science Foundation under Grant 1909555, Grant 2029044, Grant 2125246, Grant 1633381, and Grant 1610282, in part by the Army Research Laboratory under Grant W911NF2020124, and in part by the U.S. Army Materiel Command under Grant W911NF2110030. Publisher Copyright: © 2020 IEEE.
PY - 2022/12/1
Y1 - 2022/12/1
N2 - Convenient access to copious, multifaceted data has encouraged machine learning researchers to reconsider correlation-based learning and embrace the opportunity of causality-based learning, i.e., causal machine learning (causal learning). Recent years have, therefore, witnessed great effort in developing causal learning algorithms that aim to help artificial intelligence (AI) achieve human-level intelligence. Due to the lack of ground-truth data, one of the biggest challenges in current causal learning research is algorithm evaluation. This largely impedes the cross-pollination of AI and causal inference and prevents each field from benefiting from the other's advances. To bridge from conventional causal inference (i.e., based on statistical methods) to causal learning with Big Data (i.e., the intersection of causal inference and machine learning), in this survey, we review commonly used datasets, evaluation methods, and measures for causal learning, using an evaluation pipeline similar to that of conventional machine learning. We focus on the two fundamental causal inference tasks and on causality-aware machine learning tasks. Limitations of current evaluation procedures are also discussed. We then examine popular causal inference tools/packages and conclude with the primary challenges and opportunities for benchmarking causal learning algorithms in the era of Big Data. The survey seeks to bring to the forefront the urgency of developing publicly available benchmarks and consensus-building standards for causal learning evaluation with observational data. In doing so, we hope to broaden the discussion and facilitate collaboration to advance the innovation and application of causal learning.
KW - Benchmarking
KW - Big Data
KW - causal inference
KW - causal learning
KW - evaluation
UR - http://www.scopus.com/inward/record.url?scp=85130431240&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130431240&partnerID=8YFLogxK
U2 - 10.1109/TAI.2022.3150264
DO - 10.1109/TAI.2022.3150264
M3 - Article
SN - 2691-4581
VL - 3
SP - 924
EP - 943
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
IS - 6
ER -