Estimating the effect of multiple treatments#
[1]:
from dowhy import CausalModel
import dowhy.datasets
import warnings
warnings.filterwarnings('ignore')
[2]:
data = dowhy.datasets.linear_dataset(10, num_common_causes=4, num_samples=10000,
                                     num_instruments=0, num_effect_modifiers=2,
                                     num_treatments=2,
                                     treatment_is_binary=False,
                                     num_discrete_common_causes=2,
                                     num_discrete_effect_modifiers=0,
                                     one_hot_encode=False)
df = data['df']
df.head()
[2]:
| | X0 | X1 | W0 | W1 | W2 | W3 | v0 | v1 | y |
|---|---|---|---|---|---|---|---|---|---|
| 0 | -0.257775 | -1.816821 | -0.677777 | -0.192001 | 2 | 1 | 6.428839 | 6.021733 | 13.954816 |
| 1 | -1.988337 | 0.075061 | 2.273270 | 0.394716 | 0 | 3 | 3.158980 | 24.866780 | -271.960235 |
| 2 | -1.408438 | 0.344126 | 1.738955 | -0.559022 | 2 | 2 | 9.934381 | 19.971147 | -638.621666 |
| 3 | -0.992839 | -1.070089 | -0.718630 | -0.987752 | 3 | 1 | 7.781446 | 4.494957 | -44.571483 |
| 4 | -1.335100 | 0.422100 | 1.008882 | 0.087580 | 1 | 2 | 5.762307 | 15.015311 | -166.633190 |
[3]:
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"], outcome=data["outcome_name"],
                    graph=data["gml_graph"])
[4]:
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))


[5]:
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W1,W0,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W1,W0,W2,U) = P(y|v0,v1,W3,W1,W0,W2)
### Estimand : 2
Estimand name: iv
No such variable(s) found!
### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
Linear model#
Let us first see an example with a linear model. When the treatment is multi-dimensional, the control_value and treatment_value can be provided as a tuple/list with one entry per treatment.
The interpretation is the change in y when (v0, v1) is changed from (0,0) to (1,1).
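To see why a single number summarizes a two-dimensional treatment change, here is a minimal NumPy sketch, independent of DoWhy, with made-up coefficients: in a linear model, the effect of moving (v0, v1) from (0,0) to (1,1) is just the sum of the two treatment coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative data-generating process (not DoWhy's dataset): two continuous
# treatments with known coefficients 10 and 5, plus one confounder w.
w = rng.normal(size=n)
v0 = 0.5 * w + rng.normal(size=n)
v1 = -0.3 * w + rng.normal(size=n)
y = 10 * v0 + 5 * v1 + 2 * w + rng.normal(size=n)

# Fit a linear model adjusting for the confounder.
X = np.column_stack([v0, v1, w])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Effect of moving (v0, v1) from (0, 0) to (1, 1): the sum of the two
# treatment coefficients, roughly 10 + 5 = 15 here.
effect = beta[0] * (1 - 0) + beta[1] * (1 - 0)
print(round(effect, 1))
```

With a nonlinear estimator the same (control_value, treatment_value) tuples define the contrast, but the effect is no longer a simple sum of coefficients.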
[6]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1),
                                        method_params={'need_conditional_estimates': False})
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W1,W0,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W1,W0,W2,U) = P(y|v0,v1,W3,W1,W0,W2)
## Realized estimand
b: y~v0+v1+W3+W1+W0+W2+v0*X1+v0*X0+v1*X1+v1*X0
Target units: ate
## Estimate
Mean value: -54.18984219940326
You can also estimate conditional effects, based on the effect modifiers.
[7]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1))
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W1,W0,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W1,W0,W2,U) = P(y|v0,v1,W3,W1,W0,W2)
## Realized estimand
b: y~v0+v1+W3+W1+W0+W2+v0*X1+v0*X0+v1*X1+v1*X0
Target units:
## Estimate
Mean value: -54.18984219940326
### Conditional Estimates
__categorical__X1 __categorical__X0
(-4.567, -1.725] (-4.399, -1.75] -151.096277
(-1.75, -1.164] -105.384299
(-1.164, -0.662] -76.732256
(-0.662, -0.0572] -47.937022
(-0.0572, 3.238] 1.314341
(-1.725, -1.147] (-4.399, -1.75] -138.476042
(-1.75, -1.164] -91.807058
(-1.164, -0.662] -62.519469
(-0.662, -0.0572] -33.528966
(-0.0572, 3.238] 14.827062
(-1.147, -0.637] (-4.399, -1.75] -128.769920
(-1.75, -1.164] -83.709891
(-1.164, -0.662] -54.501820
(-0.662, -0.0572] -25.566340
(-0.0572, 3.238] 22.731651
(-0.637, -0.0533] (-4.399, -1.75] -124.400697
(-1.75, -1.164] -74.202575
(-1.164, -0.662] -46.237349
(-0.662, -0.0572] -16.780231
(-0.0572, 3.238] 32.365639
(-0.0533, 2.942] (-4.399, -1.75] -109.477877
(-1.75, -1.164] -62.530083
(-1.164, -0.662] -32.608161
(-0.662, -0.0572] -3.345889
(-0.0572, 3.238] 43.630256
dtype: float64
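The conditional estimates above are indexed by quantile bins of the two continuous effect modifiers X1 and X0. A rough pandas sketch of that grouping step (illustrative, with a made-up per-row effect; not DoWhy's internal code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "X0": rng.normal(size=1000),
    "X1": rng.normal(size=1000),
})
# Hypothetical per-row effect that depends linearly on the modifiers,
# centered near the mean estimate seen above.
df["effect"] = -54 + 20 * df["X0"] + 15 * df["X1"]

# Bin each modifier into 5 quantile groups, as in the output above, and
# average the effect within each (X1, X0) cell: 5 x 5 = 25 cells.
conditional = df.groupby(
    [pd.qcut(df["X1"], 5), pd.qcut(df["X0"], 5)], observed=True
)["effect"].mean()
print(conditional.head())
```

The resulting Series has a two-level interval index, matching the shape of the conditional estimates printed above.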
More methods#
You can also use methods from the EconML or CausalML libraries that support multiple treatments. For examples, see the conditional treatment effects notebook: https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html
Propensity score-based methods do not currently support multiple treatments.