Estimating the effect of multiple treatments#
[1]:
from dowhy import CausalModel
import dowhy.datasets
import warnings
warnings.filterwarnings('ignore')
[2]:
data = dowhy.datasets.linear_dataset(
    beta=10,
    num_common_causes=4,
    num_samples=10000,
    num_instruments=0,
    num_effect_modifiers=2,
    num_treatments=2,
    treatment_is_binary=False,
    num_discrete_common_causes=2,
    num_discrete_effect_modifiers=0,
    one_hot_encode=False)
df = data["df"]
df.head()
[2]:
|   | X0 | X1 | W0 | W1 | W2 | W3 | v0 | v1 | y |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.921327 | 1.390097 | 0.762682 | 0.498981 | 0 | 1 | 4.025165 | 10.702244 | 521.876393 |
| 1 | 2.860156 | 0.448513 | -0.614746 | 0.745837 | 3 | 3 | 6.862402 | 29.439688 | 1720.771628 |
| 2 | 1.098624 | 1.109132 | 0.508865 | 1.516299 | 0 | 0 | 4.056166 | 1.848361 | 122.478391 |
| 3 | 0.852896 | -1.583007 | 0.769158 | 0.155863 | 1 | 3 | 9.428646 | 24.507579 | -1250.825929 |
| 4 | 1.066358 | -0.863283 | -0.692260 | 0.433443 | 3 | 0 | 2.457265 | 12.994037 | 79.769286 |
[3]:
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"])
[4]:
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))


[5]:
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W0,W1,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W0,W1,W2,U) = P(y|v0,v1,W3,W0,W1,W2)
### Estimand : 2
Estimand name: iv
No such variable(s) found!
### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
Linear model#
Let us first look at an example with a linear model. The control_value and treatment_value can be provided as a tuple (or list) when the treatment is multi-dimensional.
The interpretation is the change in y when (v0, v1) is changed from (0,0) to (1,1).
[6]:
linear_estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.linear_regression",
    control_value=(0, 0),
    treatment_value=(1, 1),
    method_params={"need_conditional_estimates": False})
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W0,W1,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W0,W1,W2,U) = P(y|v0,v1,W3,W0,W1,W2)
## Realized estimand
b: y~v0+v1+W3+W0+W1+W2+v0*X1+v0*X0+v1*X1+v1*X0
Target units: ate
## Estimate
Mean value: 86.98215448607412
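The tuple interpretation can be illustrated outside of DoWhy with a minimal scikit-learn sketch (synthetic data and coefficients assumed for this sketch, not DoWhy's internals): for a purely linear outcome, moving (v0, v1) from (0, 0) to (1, 1) changes y by the sum of the two treatment coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                # observed common causes
v0 = W @ [1.0, 0.5] + rng.normal(size=n)   # treatments depend on W
v1 = W @ [0.3, 1.2] + rng.normal(size=n)
# Assumed true effects for this sketch: 10 for v0, 5 for v1
y = 10 * v0 + 5 * v1 + W @ [2.0, 3.0] + rng.normal(size=n)

# Adjust for W by including it as a regressor alongside both treatments
X = np.column_stack([v0, v1, W])
reg = LinearRegression().fit(X, y)

# Effect of moving (v0, v1) from (0, 0) to (1, 1), holding W fixed
effect = reg.predict([[1, 1, 0, 0]])[0] - reg.predict([[0, 0, 0, 0]])[0]
print(effect)  # close to 10 + 5 = 15
```

The estimated contrast is simply the sum of the per-treatment coefficients, which is exactly what the (0,0) to (1,1) specification above asks for.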
You can also estimate conditional effects based on the effect modifiers.
[7]:
linear_estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.linear_regression",
    control_value=(0, 0),
    treatment_value=(1, 1))
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W3,W0,W1,W2])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W3,W0,W1,W2,U) = P(y|v0,v1,W3,W0,W1,W2)
## Realized estimand
b: y~v0+v1+W3+W0+W1+W2+v0*X1+v0*X0+v1*X1+v1*X0
Target units:
## Estimate
Mean value: 86.98215448607412
### Conditional Estimates
__categorical__X1 __categorical__X0
(-3.7729999999999997, -0.505] (-3.516, -0.208] -130.785325
(-0.208, 0.383] -100.775154
(0.383, 0.879] -80.757119
(0.879, 1.483] -59.751641
(1.483, 4.222] -28.185272
(-0.505, 0.092] (-3.516, -0.208] -27.403113
(-0.208, 0.383] 6.185364
(0.383, 0.879] 22.802503
(0.879, 1.483] 43.753686
(1.483, 4.222] 76.549133
(0.092, 0.597] (-3.516, -0.208] 38.508392
(-0.208, 0.383] 69.321297
(0.383, 0.879] 86.952838
(0.879, 1.483] 107.354818
(1.483, 4.222] 137.803860
(0.597, 1.181] (-3.516, -0.208] 98.693127
(-0.208, 0.383] 130.652155
(0.383, 0.879] 151.174073
(0.879, 1.483] 168.625077
(1.483, 4.222] 202.541129
(1.181, 4.31] (-3.516, -0.208] 203.499012
(-0.208, 0.383] 234.643790
(0.383, 0.879] 248.317746
(0.879, 1.483] 272.567389
(1.483, 4.222] 302.347991
dtype: float64
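The grouping pattern behind the `__categorical__` index above can be sketched with plain pandas: bin each effect modifier into quintiles and average within each cell. The per-unit effects below are hypothetical, made up for this sketch rather than taken from DoWhy's internal computation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10000
df = pd.DataFrame({
    "X0": rng.normal(size=n),
    "X1": rng.normal(size=n),
})
# Hypothetical unit-level effects that vary with the modifiers
df["effect"] = 87 + 60 * df["X1"] + 40 * df["X0"] + rng.normal(size=n)

# Quintile-bin each modifier and average within each (X1, X0) cell,
# mirroring the 5 x 5 conditional-estimate index shown above
cond = df.groupby(
    [pd.qcut(df["X1"], 5), pd.qcut(df["X0"], 5)], observed=True
)["effect"].mean()
print(cond)
```

Because the hypothetical effects increase in both modifiers, the cell means rise along both index levels, just as in the output above.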
More methods#
You can also use methods from the EconML or CausalML libraries that support multiple treatments. For examples, see the conditional treatment effects notebook: https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html
Propensity-based methods do not currently support multiple treatments.