{ "cells": [ { "cell_type": "markdown", "id": "777695bc", "metadata": {}, "source": [ "# Counterfactual Fairness\n", "\n", "This post introduces and replicates counterfactual fairness as proposed by Kusner et al. (2018) using DoWhy.\n", "\n", "Counterfactual fairness is an individual-level measure of causal fairness, capturing the notion that an estimator's decision is deemed fair for an individual if it remains consistent both in (a) the actual world and (b) a counterfactual world where the individual belongs to a different demographic group.\n", "\n", "### When to apply counterfactual fairness?\n", "1. To assess whether a prediction is individually causally fair, i.e., whether a prediction is fair for a given individual `i`.\n", "\n", "### What is required to estimate counterfactual fairness?\n", "1. A dataset with protected attributes or proxy variables.\n", "2. A Structural Causal Model (SCM), discovered either using a causal discovery algorithm or through expert-driven causal DAG creation.\n", "\n", "##### Notation\n", "\n", "- A: Set of protected attributes of an individual, representing variables that must not be subject to discrimination.\n", "- a: Actual value taken by the protected attribute in the real world.\n", "- a': Counterfactual (flipped) value for the protected attribute.\n", "- X: Other observable attributes of any particular individual.\n", "- U: The set of relevant latent attributes that are not observed.\n", "- Y: The outcome to be predicted, which might be contaminated with historical biases.\n", "- $\\hat{Y}$: The predictor, a random variable dependent on A, X, and U, produced by a machine learning algorithm as a prediction of Y.\n", "\n", "Following Pearl, a structural causal model M is defined as a 4-tuple (U, V, F, P(u)), which can be represented using a directed acyclic graph (DAG), where:\n", "\n", "- U: Set of exogenous (unobserved) variables determined by factors outside of the model.\n", "- V: Set {V1 ... Vn} of endogenous (observed) variables completely determined by variables in the model (both U and V). Note: V includes both the features X and the output Y.\n", "- F: Set of structural equations {f1 ... fn}, where each fi is a process by which Vi is assigned a value fi(v, u) in response to the current (relevant) values of U and V.\n", "- P(u): (Prior) distribution over U.\n", "- do(Zi = z): (Do) intervention (Pearl 2000, Ch. 3), representing a manipulation of M where the values of the chosen intervention variables Z (a subset of V) are set to a constant value z, regardless of how the values are ordinarily generated by the DAG. This captures the idea of an agent, external to the system, modifying the system by forcefully assigning the value z to Zi (for example, as in a randomized experiment). In the fairness literature, Z often comprises protected attributes such as race and gender.\n", "\n", "M is causal because, given P(U), following a do-intervention on a subset Z we can derive the distribution over the remaining, non-intervened variables in V. A toy sketch of such an intervention follows below.\n" ] },
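{ "cell_type": "markdown", "id": "1a2b3c4d", "metadata": {}, "source": [ "Before diving into the law school example, the short sketch below makes the do-intervention notation concrete using DoWhy's `gcm` API (the same API used throughout this post). The three-node graph `A -> X -> Y` and the simulated data are hypothetical and only serve to illustrate the definitions above.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "2b3c4d5e", "metadata": {}, "outputs": [], "source": [ "# Minimal sketch on a hypothetical toy SCM: A -> X -> Y.\n", "import networkx as nx\n", "import numpy as np\n", "import pandas as pd\n", "\n", "import dowhy.gcm as gcm\n", "\n",
"rng = np.random.default_rng(0)\n", "a_obs = rng.binomial(1, 0.5, 1000)  # protected attribute A\n", "x_obs = 2 * a_obs + rng.normal(size=1000)  # observed feature X\n", "y_obs = 3 * x_obs + rng.normal(size=1000)  # outcome Y\n", "toy_df = pd.DataFrame({\"A\": a_obs, \"X\": x_obs, \"Y\": y_obs})\n", "\n",
"toy_scm = gcm.StructuralCausalModel(nx.DiGraph([(\"A\", \"X\"), (\"X\", \"Y\")]))\n", "gcm.auto.assign_causal_mechanisms(toy_scm, toy_df)\n", "gcm.fit(toy_scm, toy_df)\n", "\n",
"# do(A = 1): force A to 1, regardless of how A is ordinarily generated by the DAG.\n", "do_samples = gcm.interventional_samples(\n", "    toy_scm, {\"A\": lambda a: 1}, num_samples_to_draw=1000\n", ")\n", "do_samples[\"Y\"].mean()" ] },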
{ "cell_type": "code", "execution_count": null, "id": "46026b9b", "metadata": {}, "outputs": [], "source": [ "from typing import List, Tuple, Union\n", "\n", "import matplotlib.pyplot as plt\n", "import networkx as nx\n", "import pandas as pd\n", "\n", "import dowhy.gcm as gcm\n", "from sklearn.base import BaseEstimator\n", "from sklearn.linear_model import LinearRegression\n", "\n", "\n",
"def analyse_counterfactual_fairness(\n", "    df: pd.DataFrame,\n", "    estimator: Union[BaseEstimator],\n", "    protected_attrs: List[str],\n", "    dag: List[Tuple],\n", "    X: List[str],\n", "    target: str,\n", "    disadvantage_group: dict,\n", "    intersectional: bool = False,\n", "    return_cache: bool = False,\n", ") -> Union[float, Tuple[float, pd.DataFrame, pd.DataFrame]]:\n",
"    \"\"\"\n", "    Calculates counterfactual fairness following Kusner et al. (2018).\n", "    Reference - https://arxiv.org/pdf/1703.06856.pdf\n", "\n", "    Args:\n", "        df (pd.DataFrame): Pandas DataFrame containing non-factorized/dummified versions of categorical\n", "            variables, the predicted label, and other variables consumed by the predictive model.\n", "        estimator (Union[BaseEstimator]): Predictive model to be used for generating the output.\n", "        protected_attrs (List[str]): List of protected attributes in the dataset.\n", "        dag (List[Tuple]): List of tuples representing the Directed Acyclic Graph (DAG) structure.\n", "        X (List[str]): List of features to be used by the estimator.\n", "        target (str): Name of the target variable in df.\n", "        disadvantage_group (dict): Dictionary specifying the disadvantaged group for each protected attribute.\n", "        intersectional (bool, optional): If True, considers intersectional fairness. Defaults to False.\n", "        return_cache (bool, optional): If True, also returns the counterfactual values under the observed and\n", "            counterfactual protected attribute interventions for each row in df. Defaults to False.\n", "\n",
"    Returns:\n", "        counterfactual_fairness (Union[float, Tuple[float, pd.DataFrame, pd.DataFrame]]):\n", "            - If return_cache is False, returns the calculated counterfactual fairness as a float.\n", "            - If return_cache is True, returns a tuple containing counterfactual fairness as a float,\n", "              DataFrame df_obs with observed counterfactual values, and DataFrame df_cf with perturbed counterfactual values.\n", "    \"\"\"\n", "\n",
"    invt_local_causal_model = gcm.InvertibleStructuralCausalModel(nx.DiGraph(dag))\n", "    gcm.auto.assign_causal_mechanisms(invt_local_causal_model, df)\n", "\n", "    gcm.fit(invt_local_causal_model, df)\n", "\n", "    df_cf = pd.DataFrame()\n", "    df_obs = pd.DataFrame()\n", "\n", "    do_val_observed = {protected_attr: \"observed\" for protected_attr in protected_attrs}\n", "    do_val_counterfact = {protected_attr: \"cf\" for protected_attr in protected_attrs}\n", "\n",
"    for idx, row in df.iterrows():\n", "\n", "        # Interventions that set each protected attribute to its observed value.\n", "        do_val_obs = {}\n", "        for protected_attr, intervention_type in do_val_observed.items():\n", "            intervention_val = float(\n", "                row[protected_attr]\n", "                if intervention_type == \"observed\"\n", "                else 1 - float(row[protected_attr])\n", "            )\n", "            do_val_obs[protected_attr] = _wrapper_lambda_fn(intervention_val)\n", "\n",
"        # Interventions that flip each (binary) protected attribute to its counterfactual value.\n", "        do_val_cf = {}\n", "        for protected_attr, intervention_type in do_val_counterfact.items():\n", "            intervention_val = float(\n", "                float(row[protected_attr])\n", "                if intervention_type == \"observed\"\n", "                else 1 - float(row[protected_attr])\n", "            )\n", "            do_val_cf[protected_attr] = _wrapper_lambda_fn(intervention_val)\n", "\n",
"        counterfactual_samples_obs = gcm.counterfactual_samples(\n", "            invt_local_causal_model, do_val_obs, observed_data=pd.DataFrame(row).T\n", "        )\n", "\n", "        counterfactual_samples_cf = gcm.counterfactual_samples(\n", "            invt_local_causal_model, do_val_cf, observed_data=pd.DataFrame(row).T\n", "        )\n", "\n", "        df_cf = pd.concat([df_cf, counterfactual_samples_cf])\n", "        df_obs = pd.concat([df_obs, counterfactual_samples_obs])\n", "\n", "    df_cf = df_cf.reset_index(drop=True)\n", "    df_obs = df_obs.reset_index(drop=True)\n", "\n",
"    if hasattr(estimator, \"predict_proba\"):\n", "        # 1. Samples from the causal model based on the observed protected attributes\n", "        lr_observed = estimator()\n", "        lr_observed.fit(df_obs[X].astype(float), df[target])\n", "        df_obs[\"preds\"] = lr_observed.predict_proba(df_obs[X].astype(float))[:, 1]\n", "\n", "        # 2. Samples from the causal model based on the counterfactual protected attributes\n", "        lr_cf = estimator()\n", "        lr_cf.fit(df_cf[X].astype(float), df[target])\n", "        df_cf[\"preds_cf\"] = lr_cf.predict_proba(df_cf[X].astype(float))[:, 1]\n", "\n",
"    else:\n", "        # 1. Samples from the causal model based on the observed protected attributes\n", "        lr_observed = estimator()\n", "        lr_observed.fit(df_obs[X].astype(float), df[target])\n", "        df_obs[\"preds\"] = lr_observed.predict(df_obs[X].astype(float))\n", "\n", "        # 2. Samples from the causal model based on the counterfactual protected attributes\n", "        lr_cf = estimator()\n", "        lr_cf.fit(df_cf[X].astype(float), df[target])\n", "        df_cf[\"preds_cf\"] = lr_cf.predict(df_cf[X].astype(float))\n", "\n",
"    query = \" and \".join(\n", "        f\"{protected_attr} == {disadvantage_group[protected_attr]}\"\n", "        for protected_attr in protected_attrs\n", "    )\n", "    mask = df.query(query).index.tolist()\n", "    counterfactual_fairness = (\n", "        df_obs.loc[mask][\"preds\"].mean() - df_cf.loc[mask][\"preds_cf\"].mean()\n", "    )\n", "\n", "    if not return_cache:\n", "        return counterfactual_fairness\n", "    else:\n", "        return counterfactual_fairness, df_obs, df_cf\n", "\n", "\n",
"def plot_counterfactual_fairness(\n", "    df_obs: pd.DataFrame,\n", "    df_cf: pd.DataFrame,\n", "    mask: pd.Series,\n", "    counterfactual_fairness: Union[int, float],\n", "    legend_observed: str,\n", "    legend_counterfactual: str,\n", "    target: str,\n", "    title: str,\n", ") -> None:\n",
"    \"\"\"\n", "    Plots counterfactual fairness comparing observed and counterfactual samples.\n", "\n", "    Args:\n", "        df_obs (pd.DataFrame): DataFrame containing observed samples.\n", "        df_cf (pd.DataFrame): DataFrame containing counterfactual samples.\n", "        mask (pd.Series): Boolean mask for selecting specific samples from the DataFrames.\n", "        counterfactual_fairness (Union[int, float]): The counterfactual fairness metric.\n", "        legend_observed (str): Legend label for the observed samples.\n", "        legend_counterfactual (str): Legend label for the counterfactual samples.\n", "        target (str): Name of the target variable to be plotted on the x-axis.\n", "        title (str): Title of the plot.\n", "\n", "    Returns:\n", "        None: The function displays the plot.\n", "    \"\"\"\n", "\n",
"    fig, ax = plt.subplots(figsize=(8, 5), nrows=1, ncols=1)\n", "\n", "    ax.hist(\n", "        df_obs[\"preds\"][mask], bins=50, alpha=0.7, label=legend_observed, color=\"blue\"\n", "    )\n", "    ax.hist(\n", "        df_cf[\"preds_cf\"][mask], bins=50, alpha=0.7, label=legend_counterfactual, color=\"orange\"\n", "    )\n", "\n", "    ax.set_xlabel(target)\n", "    ax.legend()\n", "    ax.set_title(title)\n", "\n", "    fig.suptitle(f\"Counterfactual Fairness {round(counterfactual_fairness, 3)}\")\n", "\n", "    plt.tight_layout()\n", "    plt.show()\n", "\n", "\n",
"def _wrapper_lambda_fn(val):\n", "    # Freeze the intervention value so gcm receives a constant-valued intervention function.\n", "    return lambda x: val" ] },
{ "cell_type": "markdown", "id": "5a5404b6", "metadata": {}, "source": [ "# 1. Load and Clean the Dataset" ] },
{ "cell_type": "markdown", "id": "bb243d55", "metadata": {}, "source": [ "Kusner et al. (2018) use a survey conducted by the Law School Admission Council across 163 law schools in the United States, covering 21,790 law students. The dataset used in this case study was originally collected for the study ['LSAC National Longitudinal Bar Passage Study. LSAC Research Report Series'](https://eric.ed.gov/?id=ED469370) by Linda Wightman in 1998.
The survey includes the following details:\n", "- entrance exam scores (LSAT)\n", "- pre-law school grade-point average (GPA)\n", "- average grade in the first year (FYA).\n", "\n", "It also includes protected attributes like:\n", "- Race\n", "- Sex\n", "\n", "For the purpose of this example, we will focus on the difference in outcomes only between the White and Black sub-groups and limit the dataset to a random sample of 5000 individuals:" ] },
{ "cell_type": "code", "execution_count": null, "id": "37212ae1", "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(\"datasets/law_data.csv\")\n", "\n", "df[\"Gender\"] = df[\"sex\"].map({2: 0, 1: 1}).astype(str)\n", "df[\"Race\"] = df[\"race\"].map({\"White\": 0, \"Black\": 1}).astype(str)\n", "\n", "df = (\n", "    df.query(\"race=='White' or race=='Black'\")\n", "    .rename(columns={\"UGPA\": \"GPA\", \"LSAT\": \"LSAT\", \"ZFYA\": \"avg_grade\"})[\n", "        [\"Race\", \"Gender\", \"GPA\", \"LSAT\", \"avg_grade\"]\n", "    ]\n", "    .reset_index(drop=True)\n", ")\n", "\n", "df_sample = df.astype(float).sample(5000).reset_index(drop=True)\n", "\n", "df_sample.head()" ] },
{ "cell_type": "markdown", "id": "8a5db5c9", "metadata": {}, "source": [ "Given this data, a school may wish to predict whether an applicant will have a high FYA. The school would also like to make sure these predictions are not biased by an individual’s race and sex. However, the LSAT, GPA, and FYA scores may be biased due to social factors. So how can we determine the degree to which such a predictive model may be biased for a particular individual? Using counterfactual fairness." ] },
{ "cell_type": "markdown", "id": "d5887085", "metadata": {}, "source": [ "# 2. A Formal Definition of Counterfactual Fairness\n", " \n", "Counterfactual fairness requires that, for each person in the population, the predicted value remain the same even if that person had different protected attributes, in a causal sense. More formally, $\\hat{Y}$ is counterfactually fair if, under any context X = x and A = a,\n", "\n", "$$P(\\hat{Y}_{a}(U) = y | X = x, A = a) = P(\\hat{Y}_{a'}(U) = y | X = x, A = a)$$\n", "\n", "for all y and for any value a' attainable by A. This concept is closely connected to actual causes, or token causality. In essence, for fairness, A should not be a direct cause of $\\hat{Y}$ in any specific instance, i.e., altering A while keeping non-causally dependent factors constant should not alter the distribution of $\\hat{Y}$. For an individual i, the difference between the values of Y generated in different counterfactual worlds can therefore be read as a measure of how (un)fair the predictor is for that individual.\n", "\n", "\n",
"### 2.1 Measuring Counterfactual Fairness\n", "\n", "In an SCM M, the state of any observable variable (Vi) is fully determined by the background variables (U) and structural equations (F). Thus, given a fully specified set of equations, we can use an SCM to construct counterfactuals. That is,\n", "\n", "> \"we can compute what (the distribution of) any of the variables would have been had certain other variables been different, other things being equal. For instance, given the causal model we can ask “Would individual i have graduated (Y = 1) if they hadn’t had a job?”, even if they did not actually graduate in the dataset.\" - (Russell et al., 2017)\n", "\n",
"Given an SCM M and evidence E (a subset of V), counterfactuals are constructed (i.e., inferred) in three steps, illustrated by the short sketch after this list:\n", "\n", "1. ***Abduction***: Using M, adjust the noise variables to be consistent with the observed evidence E. More formally, given E and a prior distribution P(U), compute the values of the set of unobserved variables U given M. For non-deterministic models (as is the case for most causal models in the literature), compute the posterior distribution P(U|E=e).\n", "\n", "2. ***Action***: Perform the do-intervention on Z (i.e. do(Zi = z)), resulting in the intervened SCM model M'.\n", "\n", "3. ***Prediction***: Using the intervened model M' and P(U|E=e), compute the counterfactual value of V (or P(V|E=e)).\n",
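"\n", "In DoWhy's `gcm` API, fitting a `gcm.InvertibleStructuralCausalModel` estimates invertible mechanisms so the noise terms can be recovered, and `gcm.counterfactual_samples` then performs abduction, action, and prediction for a chosen do-intervention. The following is a minimal sketch on a hypothetical two-node SCM (the full helper used in this post, `analyse_counterfactual_fairness`, was defined above):\n", "\n",
"```python\n", "# Hypothetical two-node example A -> Y: abduction, action, and prediction in one call.\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(0)\n", "a_obs = rng.binomial(1, 0.5, 500).astype(float)\n", "observed_df = pd.DataFrame({\"A\": a_obs, \"Y\": 2 * a_obs + rng.normal(size=500)})\n", "\n", "scm = gcm.InvertibleStructuralCausalModel(nx.DiGraph([(\"A\", \"Y\")]))\n", "gcm.auto.assign_causal_mechanisms(scm, observed_df)\n", "gcm.fit(scm, observed_df)\n", "\n", "row = observed_df.iloc[[0]]            # evidence E: a single observed individual\n", "cf = gcm.counterfactual_samples(\n", "    scm,\n", "    {\"A\": lambda a: 1 - a},            # action: do(A = a'), flipping the binary attribute\n", "    observed_data=row,                 # abduction: recover this row's noise terms\n", ")                                      # prediction: cf holds the counterfactual values of V\n", "```\n",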
"\n", "\n", "# 3. Measuring Counterfactual Fairness using DoWhy\n", "\n", "\n", "Algorithmically, to empirically test whether a model is counterfactually fair:\n", "\n", "- **Step 1 - Define a causal model** based on a causal DAG.\n", "- **Step 2 - Generate counterfactual samples**: Using ```gcm.counterfactual_samples```, generate two sets of samples from the model: \n", "  a. one using the observed values of the protected attributes (`df_obs`) \n", "  b. one using counterfactual values of the protected attributes (`df_cf`) \n", "- **Step 3 - Fit estimators using the sampled data**: Fit models to both the observed and counterfactual sampled data and plot the distributions of the predicted target generated by the two models. *If the distributions overlap, the estimator is counterfactually fair; otherwise it is not.*\n", "\n", "Given a dataset with protected/proxy attributes and a causal DAG, we can use the ```analyse_counterfactual_fairness``` function to measure counterfactual fairness at both the individual and aggregate level. In this example, we create the causal DAG based on the causal DAG provided in Kusner et al. (2018)." ] },
{ "cell_type": "code", "execution_count": null, "id": "2280c2df", "metadata": {}, "outputs": [], "source": [ "dag = [\n", "    (\"Race\", \"GPA\"),\n", "    (\"Race\", \"LSAT\"),\n", "    (\"Race\", \"avg_grade\"),\n", "    (\"Gender\", \"GPA\"),\n", "    (\"Gender\", \"LSAT\"),\n", "    (\"Gender\", \"avg_grade\"),\n", "    (\"GPA\", \"avg_grade\"),\n", "    (\"LSAT\", \"avg_grade\"),\n", "]" ] },
{ "cell_type": "markdown", "id": "412425b6", "metadata": {}, "source": [ "The ```analyse_counterfactual_fairness``` method also accepts as inputs:\n", "- the name of the target variable (`target`; here avg_grade)\n", "- the input dataset (`df`)\n", "- an unfitted sklearn estimator (`estimator`; here LinearRegression)\n", "- the list of protected attributes (`protected_attrs`)\n", "- the list of input feature names (`X`)\n", "- a dictionary specifying the unique identifying label of the disadvantaged group for each protected attribute (`disadvantage_group`)." ] },
{ "cell_type": "code", "execution_count": null, "id": "d70d0187", "metadata": {}, "outputs": [], "source": [ "target = \"avg_grade\"\n", "disadvantage_group = {\"Race\": 1}\n", "protected_attrs = [\"Race\"]\n", "features = [\"GPA\", \"LSAT\"]" ] },
{ "cell_type": "markdown", "id": "8781ea06", "metadata": {}, "source": [ "### 3.1 Univariate Analysis\n", "\n", "Now, we are ready to call the method ```analyse_counterfactual_fairness``` to carry out counterfactual fairness analysis along the Race dimension: " ] },
{ "cell_type": "code", "execution_count": null, "id": "6fdaa7be", "metadata": {}, "outputs": [], "source": [ "config = {\n", "    \"df\": df_sample,\n", "    \"dag\": dag,\n", "    \"estimator\": LinearRegression,\n", "    \"protected_attrs\": protected_attrs,\n", "    \"X\": features,\n", "    \"target\": target,\n", "    \"disadvantage_group\": disadvantage_group,\n", "    \"return_cache\": True,\n", "}\n", "\n", "counterfactual_fairness, df_obs, df_cf = analyse_counterfactual_fairness(**config)\n", "counterfactual_fairness" ] },
{ "cell_type": "markdown", "id": "154d251d", "metadata": {}, "source": [ "`df_obs` contains the predicted values for each individual given the observed value of their protected attribute in the real world." ] },
{ "cell_type": "code", "execution_count": null, "id": "79c2a233", "metadata": {}, "outputs": [], "source": [ "df_obs.head()" ] },
{ "cell_type": "markdown", "id": "b68a6cc2", "metadata": {}, "source": [ "`df_cf` contains the predicted values for each individual given the counterfactual value of their protected attribute. Here, since the only variable we are intervening on is Race, we see that each individual's Race has been changed from 0 to 1." ] },
{ "cell_type": "code", "execution_count": null, "id": "56d88825", "metadata": {}, "outputs": [], "source": [ "df_cf.head()" ] },
{ "cell_type": "code", "execution_count": null, "id": "a1176177", "metadata": {}, "outputs": [], "source": [ "plot_counterfactual_fairness(\n", "    df_obs=df_obs,\n", "    df_cf=df_cf,\n", "    mask=(df_sample[\"Race\"] == 1).values,\n", "    counterfactual_fairness=counterfactual_fairness,\n", "    legend_observed=\"Observed Samples (Race=Black)\",\n", "    legend_counterfactual=\"Counterfactual Samples (Race=White)\",\n", "    target=target,\n", "    title=\"Black -> White\",\n", ")" ] },
{ "cell_type": "markdown", "id": "145c5d84", "metadata": {}, "source": [ "Examining the results for the example at hand in Figure 1, we see that the observed and counterfactual distributions don’t overlap. Changing the race of the Black subgroup to White shifts the distribution of $\\hat{Y}$ to the right, i.e. it increases the avg_grade by ~0.50 on average. Thus, the fitted estimator can be concluded to be not counterfactually fair.\n", "\n", "### 3.2 Intersectional Analysis\n", "\n", "We can further extend the analysis intersectionally, by examining the implication of multiple protected attributes on counterfactual fairness together.
Here, we use the two protected attributes available to us - `[\"Race\",\"Gender\"]` - to carry out an intersectional counterfactual fairness analysis and determine how counterfactually fair the estimator is for Black Females: " ] },
{ "cell_type": "code", "execution_count": null, "id": "725bf447", "metadata": {}, "outputs": [], "source": [ "disadvantage_group = {\"Race\": 1, \"Gender\": 1}\n", "config = {\n", "    \"df\": df_sample.astype(float).reset_index(drop=True),\n", "    \"estimator\": LinearRegression,\n", "    \"protected_attrs\": [\"Race\", \"Gender\"],\n", "    \"dag\": dag,\n", "    \"X\": features,\n", "    \"target\": \"avg_grade\",\n", "    \"return_cache\": True,\n", "    \"disadvantage_group\": disadvantage_group,\n", "    \"intersectional\": True,\n", "}\n", "\n", "counterfactual_fairness, df_obs, df_cf = analyse_counterfactual_fairness(**config)\n", "counterfactual_fairness" ] },
{ "cell_type": "code", "execution_count": null, "id": "ad7671e5", "metadata": {}, "outputs": [], "source": [ "plot_counterfactual_fairness(\n", "    df_obs=df_obs,\n", "    df_cf=df_cf,\n", "    mask=((df_sample[\"Race\"] == 1).values & (df_sample[\"Gender\"] == 1).values),\n", "    counterfactual_fairness=counterfactual_fairness,\n", "    legend_observed=\"Observed Samples (Race=Black, Gender=Female)\",\n", "    legend_counterfactual=\"Counterfactual Samples (Race=White, Gender=Male)\",\n", "    target=target,\n", "    title=\"(Black, Female) -> (White, Male)\",\n", ")" ] },
{ "cell_type": "markdown", "id": "ba44a873", "metadata": {}, "source": [ "Examining the results of the intersectional analysis in Figure 2, we see that the observed and counterfactual distributions don’t overlap at all. Changing the race and gender of Black females to those of White males shifts the distribution of $\\hat{Y}$ for the (Black, Female) sub-group to the right, i.e. it increases the avg_grade by ~0.50 on average. Thus, the fitted estimator can be concluded to be not counterfactually fair intersectionally." ] },
{ "cell_type": "markdown", "id": "cf2f1ac8", "metadata": {}, "source": [ "### 3.3 Some Additional Uses of the Counterfactuals `df_obs`, `df_cf` for Fairness\n", "\n", "- Counterfactual values of Y (constructed using a set of fair causal models) can be used as a fair target to train the model with, in the presence of historically biased labels Y.\n", "- Train an estimator to be counterfactually fair by using an optimization routine that applies a penalty for a given individual i in proportion to the (average) difference in Y across counterfactual worlds for that individual i. For instance, if for an individual i the outcome Y is very different across counterfactual worlds, then that sample i will increase the loss by a proportionately higher amount. Conversely, if for an individual j the outcome Y is similar across counterfactual worlds, then that sample j will increase the loss by a proportionately lower amount (see Russell, Chris et al., 2017 for more details). A rough sketch of such a penalty follows below.\n",
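"\n", "The snippet below is a minimal, hypothetical sketch of such a penalty, evaluated on the `df_obs` and `df_cf` produced above; the weighting constant `lam` and the squared-difference form are illustrative choices, not the Multi-World Fairness Algorithm of Russell et al. (2017).\n", "\n",
"```python\n", "# Hypothetical counterfactual-fairness penalty, evaluated on the fitted predictions above.\n", "import numpy as np\n", "\n", "preds_obs = df_obs[\"preds\"].to_numpy()      # predictions under the observed protected attributes\n", "preds_cf = df_cf[\"preds_cf\"].to_numpy()     # predictions under the counterfactual protected attributes\n", "y_true = df_sample[\"avg_grade\"].to_numpy()\n", "\n", "lam = 1.0  # strength of the fairness penalty (illustrative choice)\n", "task_loss = np.mean((preds_obs - y_true) ** 2)\n", "fairness_penalty = np.mean((preds_obs - preds_cf) ** 2)  # large when counterfactual worlds disagree\n", "total_loss = task_loss + lam * fairness_penalty\n", "total_loss\n", "```\n",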
"\n", "\n", "# 4. Limitations of Counterfactual Fairness\n", "\n", "There may be disagreements about the \"correct\" causal model due to:\n", "\n", "- Changing the structure of the DAG, e.g. adding an edge\n", "- Changing the latent variables, e.g. changing the function generating a node to have a different signal vs. noise decomposition\n", "- Preventing certain paths from propagating counterfactual values\n", "\n", "The literature suggests achieving counterfactual fairness under multiple competing causal models as a solution to the above. Russell et al. (2017) put forward one such solution, called the \"Multi-World Fairness Algorithm\".\n", "\n", "\n", "# References\n", "\n", "- Kusner, Matt et al. Counterfactual Fairness. 2018. https://arxiv.org/pdf/1703.06856.pdf\n", "- Russell, Chris et al. When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness. 2017. https://proceedings.neurips.cc/paper/2017/file/1271a7029c9df08643b631b02cf9e116-Paper.pdf\n", "\n", "# Appendix: Fairness Through Unawareness (FTU) Creates a Counterfactually Unfair Estimator\n", "\n", "To demonstrate that **an \"aware\" linear regression is counterfactually fair while FTU makes it counterfactually unfair**, we build one \"aware\" linear regression and compare it with the \"unaware\" linear regression constructed in Figure 1." ] },
{ "cell_type": "code", "execution_count": null, "id": "c6baef0b", "metadata": {}, "outputs": [], "source": [ "config = {\n", "    \"df\": df_sample,\n", "    \"estimator\": LinearRegression,\n", "    \"protected_attrs\": [\"Race\"],\n", "    \"dag\": dag,\n", "    \"X\": [\"GPA\", \"LSAT\", \"Race\", \"Gender\"],\n", "    \"target\": \"avg_grade\",\n", "    \"return_cache\": True,\n", "    \"disadvantage_group\": disadvantage_group,\n", "}\n", "\n", "counterfactual_fairness_aware, df_obs_aware, df_cf_aware = (\n", "    analyse_counterfactual_fairness(**config)\n", ")\n", "counterfactual_fairness_aware" ] },
{ "cell_type": "code", "execution_count": null, "id": "84b1c218", "metadata": {}, "outputs": [], "source": [ "plot_counterfactual_fairness(\n", "    df_obs=df_obs_aware,\n", "    df_cf=df_cf_aware,\n", "    mask=(df_sample[\"Race\"] == 1).values,\n", "    counterfactual_fairness=counterfactual_fairness_aware,\n", "    legend_observed=\"Observed Samples (Race=Black)\",\n", "    legend_counterfactual=\"Counterfactual Samples (Race=White)\",\n", "    target=target,\n", "    title=\"Black -> White\",\n", ")" ] },
{ "cell_type": "markdown", "id": "578ea0c1", "metadata": {}, "source": [ "Comparing Figure 1 and Figure 3, the comparative plots of the observed counterfactual samples `df_obs` and the perturbed counterfactual samples `df_cf` show that:\n", "\n", "1. For the \"aware\" linear regression in Figure 3, the two distributions overlap. Thus, the estimator is counterfactually fair. \n", "2. For the \"unaware\" linear regression in Figure 1, the two distributions are quite distinct and do not overlap, suggesting that the estimator is counterfactually unfair, i.e. regressing avg_grade on *only* GPA and LSAT makes the estimator counterfactually unfair.\n", "\n", "Notably, it can formally be shown that, in general, \"Regressing Y on X alone obeys the FTU criterion but is not counterfactually fair, so omitting A (FTU) may introduce unfairness into an otherwise fair world\" (Kusner et al., 2018)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 5 }