Deep learning lacks causality

Gaining mechanistic understanding requires knowledge of causality: science aims to distinguish genuine causes from spurious correlations. Hume argued that, on empiricist grounds, causation cannot be known at all. Logical positivism holds that logic is the only source of knowledge, while empiricism holds that observation is; since causality is not directly observable, empiricists were skeptical of causation. Galton's notion of correlation made the study of causation somewhat more tangible, but correlation is problematic: the causal relation might run in either direction, or through some other route entirely. Causal structures explain correlations, and correlations can in turn provide evidence for causation. The causal structure, for example, explains why changing a barometer's reading cannot prevent a storm.

Interventionism attempts to distinguish correlation from causation through systematic manipulation in experiments; the introduction of interventionism has been termed the "Causal Revolution." Causal relationships can be represented with structural graphical models, and, given a set of rules, such models can shed light on causal relationships. The Ladder of Causation has three rungs: association (related to correlation), intervention (related to systematic manipulation), and counterfactuals (related to falsifiability). Judea Pearl argues that causal models are required to answer counterfactual questions.

Pearl argues that current AI is stuck on the first rung of the ladder, association. In science, deep learning appears to be good at pattern detection, yet not at formulating theories or causal models. Darwin's causal explanation of the origin of species is a significant example of what a causal model can achieve. Pearl therefore argues that we need to teach AI cause and effect. Advances in AI raise many important philosophical questions.
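
A minimal Python sketch of the barometer example as a structural causal model (the variable names, mechanisms, and probabilities are illustrative assumptions, not taken from Pearl): observing a barometer reading (association, rung one) changes the estimated storm probability, while intervening to set the reading by hand (rung two) does not, because the intervention severs the link from the common cause, atmospheric pressure.

```python
import random

# Illustrative structural causal model (assumed, not from the note):
#   pressure -> barometer
#   pressure -> storm
# Barometer and storm are correlated only through their common cause, pressure.

def sample(do_barometer=None):
    """Draw one sample; optionally intervene by forcing the barometer reading."""
    pressure = random.choice(["low", "high"])                        # exogenous cause
    barometer = pressure if do_barometer is None else do_barometer   # do(barometer := x)
    storm = (pressure == "low") and random.random() < 0.8            # storms mostly follow low pressure
    return barometer, storm

def storm_rate(samples):
    return sum(storm for _, storm in samples) / len(samples)

random.seed(0)
observational = [sample() for _ in range(10_000)]
interventional = [sample(do_barometer="high") for _ in range(10_000)]

# Conditioning on a high reading (association) changes the storm rate...
seen_high = [s for s in observational if s[0] == "high"]
print("P(storm | barometer=high)     ~", round(storm_rate(seen_high), 2))
# ...but *setting* the barometer to high (intervention) leaves it unchanged.
print("P(storm | do(barometer=high)) ~", round(storm_rate(interventional), 2))
print("P(storm) overall              ~", round(storm_rate(observational), 2))
```

Under these assumed mechanisms the observed conditional probability drops to roughly zero, while the interventional probability stays at the baseline storm rate, which is exactly the gap between rung one and rung two of the ladder.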
