Anatole, thanks for the kind words and for your question!
Are you referring to causality or Simpson's Paradox?
The latter, of course, is only one aspect of the former.
As for Data Science, I'm thinking in terms of analyses and machine learning.
In analyses, those who master Simpson's Paradox will be more alert to avoiding it and will improve how they interpret data. I was delighted in a meeting when a lead engineer pointed out the need to account for it when planning what data to collect and analyse.
In machine learning, I personally found that it helped me understand how to improve feature selection. (Maybe a subject for a future article :-) )
As I mention in a previous article:
http://bit.ly/start-ask-why-medium
Simpson's Paradox is just one aspect of Causality. In it I mention that even though Simpson's Paradox is resolved by controlling for a confounding common cause, there are situations where one should not control for a variable because doing so might generate spurious correlations (see the discussion about Berkson's Paradox). Hence the importance of using graph models to justify which parameters to control for and which not.
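To make the Simpson's Paradox part concrete, here is a minimal sketch in Python/pandas. The numbers are purely illustrative (in the spirit of the classic kidney-stone example), and the column names are my own: treatment A looks better within every severity group, yet worse in the pooled data, because severity is a common cause of both the treatment choice and the outcome.

```python
import pandas as pd

# Illustrative counts only: (severity, treatment, successes, trials)
records = [
    ("mild",   "A",  81,  87),
    ("mild",   "B", 234, 270),
    ("severe", "A", 192, 263),
    ("severe", "B",  55,  80),
]
df = pd.DataFrame(records, columns=["severity", "treatment", "successes", "trials"])

# Per-stratum success rates: treatment A wins in BOTH severity groups ...
per_group = df.assign(rate=df["successes"] / df["trials"])
print(per_group[["severity", "treatment", "rate"]])

# ... yet the aggregate (ignoring the confounder) favours treatment B.
agg = df.groupby("treatment")[["successes", "trials"]].sum()
agg["rate"] = agg["successes"] / agg["trials"]
print(agg)
```

Here controlling for severity gives the right answer; the point of the graph model is that for a Berkson-style collider the opposite holds, and conditioning on it would manufacture a spurious association.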
In ML, this justification of which variables to control for and which not may help with feature selection.
Ultimately, if the setting is set up properly, one can aspire not only to report what has been seen but also to estimate impact (causality). Although causal estimation is not always possible, the thought process of framing a problem and deciding how to solve it is always fruitful.
I hope that starts to address your question. I'm sure that this is far from an exhaustive answer.