Local average treatment effect
In econometrics and related empirical fields, the local average treatment effect (LATE), also known as the complier average causal effect (CACE), is the effect of a treatment for subjects who comply with the experimental treatment assigned to their sample group. It is not to be confused with the average treatment effect (ATE), which includes compliers and non-compliers together. Compliance refers to whether human subjects follow the experimental treatment condition to which they are assigned. The LATE is calculated in the same way as the ATE, but excludes non-compliant subjects. If the goal is to evaluate the effect of a treatment in ideal, compliant subjects, the LATE value will give a more precise estimate. However, it may lack external validity by ignoring the effect of non-compliance that is likely to occur in the real-world deployment of a treatment method. The LATE can be estimated by the ratio of the estimated intent-to-treat effect to the estimated proportion of compliers, or alternatively through an instrumental variable estimator.
The LATE was first introduced in the econometrics literature by Guido W. Imbens and Joshua D. Angrist in 1994, who shared one half of the 2021 Nobel Memorial Prize in Economic Sciences.[1][2] As summarized by the Nobel Committee, the LATE framework "significantly altered how researchers approach empirical questions using data generated from either natural experiments or randomized experiments with incomplete compliance to the assigned treatment. At the core, the LATE interpretation clarifies what can and cannot be learned from such experiments."[2]
The phenomenon of non-compliant subjects (patients) is also known in medical research.[3] In the biostatistics literature, Baker and Lindeman (1994) independently developed the LATE method for a binary outcome with the paired availability design and the key monotonicity assumption.[4] Baker, Kramer, and Lindeman (2016) summarized the history of its development.[5] Various papers have called both Imbens and Angrist (1994) and Baker and Lindeman (1994) seminal.[6][7][8][9]
An early version of LATE involved one-sided noncompliance (and hence no monotonicity assumption). In 1983, Baker wrote a technical report describing LATE for one-sided noncompliance that was published in 2016 in a supplement.[5] In 1984, Bloom published a paper on LATE with one-sided noncompliance.[10] For a history of multiple discoveries involving LATE, see Baker and Lindeman (2024).[11]
General definition
The typical terminology of the Rubin causal model is used to measure the LATE, with units indexed <math>i = 1, \ldots, N</math> and a binary treatment indicator <math>d_i</math> for unit <math>i</math>. The term <math>Y_i(d)</math> is used to denote the potential outcome of unit <math>i</math> under treatment <math>d</math>.
In an ideal experiment, all subjects assigned to the treatment will comply with the treatment, while those that are assigned to control will remain untreated. In reality, however, the compliance rate is often imperfect, which prevents researchers from identifying the ATE. In such cases, estimating the LATE becomes the more feasible option. The LATE is the average treatment effect among a specific subset of the subjects, who in this case would be the compliers.
Potential outcome framework
The LATE is defined within the potential outcomes framework of causal inference. The treatment effect for subject <math>i</math> is <math>Y_i(1)-Y_i(0)</math>. It is impossible to simultaneously observe <math>Y_i(1)</math> and <math>Y_i(0)</math> for the same subject. At any given time, only a subject in its treated <math>Y_i(1)</math> or untreated <math>Y_i(0)</math> state can be observed.
Through random assignment, the expected untreated potential outcome of the control group is the same as that of the treatment group, and the expected treated potential outcome of the treatment group is the same as that of the control group. The random assignment assumption thus allows one to take the difference between the average outcome in the treatment group and the average outcome in the control group as the overall average treatment effect, such that:
<math>ATE = E[Y_i(1)-Y_i(0)] = E[Y_i|z_i=1]-E[Y_i|z_i=0]</math>
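This difference-in-means logic can be illustrated with a small simulation. The effect size, noise level, and sample size below are hypothetical choices for illustration, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes: a constant treatment effect of 2 plus noise.
y0 = rng.normal(loc=5.0, scale=1.0, size=n)   # untreated potential outcome Y_i(0)
y1 = y0 + 2.0                                 # treated potential outcome Y_i(1)

# Random assignment: each subject is assigned to treatment with probability 1/2.
z = rng.integers(0, 2, size=n)

# With full compliance, the observed outcome is Y_i(1) if assigned, else Y_i(0).
y_obs = np.where(z == 1, y1, y0)

# The difference in group means estimates the ATE = E[Y_i(1) - Y_i(0)].
ate_estimate = y_obs[z == 1].mean() - y_obs[z == 0].mean()
print(f"true ATE = 2.0, estimated ATE = {ate_estimate:.3f}")
```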
Non-compliance framework
Researchers frequently encounter non-compliance problems in their experiments, whereby subjects fail to comply with their experimental assignments. In an experiment with non-compliance, the subjects can be divided into four subgroups: compliers, always-takers, never-takers and defiers. The term <math>d_i(z)</math> represents the treatment that subject <math>i</math> actually takes when their treatment assignment is <math>z_i</math>.
Compliers are subjects who will take the treatment if and only if they were assigned to the treatment group, i.e., the subpopulation with <math>d_i(1)=1</math> and <math>d_i(0)=0</math>.
Non-compliers are composed of the three remaining subgroups:
- Always-takers are subjects who will always take the treatment even if they were assigned to the control group, i.e., the subpopulation with <math>d_i(z)=1</math>
- Never-takers are subjects who will never take the treatment even if they were assigned to the treatment group, i.e., the subpopulation with <math>d_i(z)=0</math>
- Defiers are subjects who will do the opposite of their treatment assignment status, i.e., the subpopulation with <math>d_i(1)=0</math> and <math>d_i(0)=1</math>
Non-compliance can take two forms: one-sided (always-takers and never-takers) and two-sided (defiers). In the case of one-sided non-compliance, a number of the subjects who were assigned to the treatment group remain untreated. Subjects are thus divided into compliers and never-takers, such that <math>d_i(0)=0</math> for all <math>i</math>, while <math>d_i(1)=0</math> or <math>d_i(1)=1</math>. In the case of two-sided non-compliance, a number of the subjects assigned to the treatment group fail to receive the treatment, while a number of the subjects assigned to the control group receive the treatment. In this case, subjects are divided into the four subgroups, such that both <math>d_i(0)</math> and <math>d_i(1)</math> can be 0 or 1.
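The four compliance types can be summarized by a small helper that maps a unit's pair of potential treatment decisions <math>(d_i(0), d_i(1))</math> to its label. This is a minimal sketch; the function name is illustrative, and in real data only one of the two decisions is ever observed for a given unit.

```python
def compliance_type(d0: int, d1: int) -> str:
    """Classify a unit from its potential treatment uptake d_i(0) and d_i(1)."""
    if d0 == 0 and d1 == 1:
        return "complier"      # takes the treatment only if assigned to it
    if d0 == 1 and d1 == 1:
        return "always-taker"  # takes the treatment regardless of assignment
    if d0 == 0 and d1 == 0:
        return "never-taker"   # never takes the treatment
    return "defier"            # d0 == 1 and d1 == 0: does the opposite of assignment

# The four possible profiles:
for d0 in (0, 1):
    for d1 in (0, 1):
        print((d0, d1), "->", compliance_type(d0, d1))
```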
Given non-compliance, certain assumptions are required to estimate the LATE. Under one-sided non-compliance, non-interference and excludability are assumed. Under two-sided non-compliance, non-interference, excludability, and monotonicity are assumed.
Assumptions under one-sided non-compliance
The non-interference assumption, otherwise known as the Stable Unit Treatment Value Assumption (SUTVA), is composed of two parts.[12]
- The first part of this assumption stipulates that the actual treatment status, <math>d_i</math>, of subject <math>i</math> depends only on the subject's own treatment assignment status, <math>z_i</math>. The treatment assignment status of other subjects will not affect the treatment status of subject <math>i</math>. Formally, if <math>z_i=z_i'</math>, then <math>D_i(\mathbf{z})=D_i(\mathbf{z}')</math>, where <math>\mathbf{z}</math> denotes the vector of treatment assignment status for all individuals.[13]
- The second part of this assumption stipulates that subject <math>i</math>'s potential outcomes are affected only by its own treatment assignment and the treatment it receives as a consequence of that assignment. The treatment assignment and treatment status of other subjects will not affect subject <math>i</math>'s outcomes. Formally, if <math>z_i=z_i'</math> and <math>d_i=d_i'</math>, then <math>Y_i(\mathbf{z},\mathbf{d})=Y_i(\mathbf{z}',\mathbf{d}')</math>.
- The plausibility of the non-interference assumption must be assessed on a case-by-case basis.
The excludability assumption requires that potential outcomes respond to treatment itself, <math>d_i</math>, not treatment assignment, <math>z_i</math>. Formally, <math>Y_i(z,d)=Y_i(d)</math>. So under this assumption, only <math>d</math> matters.[14] The plausibility of the excludability assumption must also be assessed on a case-by-case basis.
Assumptions under two-sided non-compliance
- All of the above, and:
- The monotonicity assumption, i.e., for each subject <math>i</math>, <math>d_i(1) \geq d_i(0)</math>. This states that if a subject were moved from the control to treatment group, <math>d_i</math> would either remain unchanged or increase. The monotonicity assumption rules out defiers, since their potential outcomes are characterized by <math>d_i(1) < d_i(0)</math>.[1] Monotonicity cannot be tested, so like the non-interference and excludability assumptions, its validity must be determined on a case-by-case basis.
Identification
The <math>LATE = \frac{ITT}{ITT_D}</math>, whereby
<math>ITT = E[Y_i(z=1)]-E[Y_i(z=0)]</math>
<math>ITT_D = E[d_i(z=1)]-E[d_i(z=0)]</math>
The <math>ITT</math> measures the average effect of experimental assignment on outcomes without accounting for the proportion of the group that was actually treated (i.e., an average of those assigned to treatment minus the average of those assigned to control). In experiments with full compliance, the <math>ITT = ATE</math>.
The <math>ITT_D</math> measures the proportion of subjects who are treated when they are assigned to the treatment group, minus the proportion who would have been treated even if they had been assigned to the control group, i.e., <math>ITT_D</math> = the share of compliers.
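Both intention-to-treat quantities, and hence the LATE, have sample analogues that can be computed directly from observed assignments, treatment uptake, and outcomes. The sketch below uses hypothetical arrays; it illustrates the ratio estimator and is not code from the cited literature.

```python
import numpy as np

def late_estimate(z, d, y):
    """Ratio estimator of the LATE: ITT divided by ITT_D.

    z: 0/1 assignment, d: 0/1 treatment received, y: outcome.
    """
    z, d, y = map(np.asarray, (z, d, y))
    itt = y[z == 1].mean() - y[z == 0].mean()    # effect of assignment on the outcome
    itt_d = d[z == 1].mean() - d[z == 0].mean()  # estimated share of compliers
    return itt / itt_d, itt, itt_d

# Hypothetical data with one-sided noncompliance (one never-taker in the treatment group).
z = np.array([1, 1, 1, 1, 0, 0, 0, 0])
d = np.array([1, 1, 0, 1, 0, 0, 0, 0])
y = np.array([9, 8, 5, 10, 5, 6, 4, 5])

late, itt, itt_d = late_estimate(z, d, y)
print(f"ITT = {itt:.2f}, ITT_D = {itt_d:.2f}, LATE = {late:.2f}")  # 3.00, 0.75, 4.00
```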
Proof
Under one-sided noncompliance, all subjects assigned to the control group will not take the treatment, therefore:[13] <math>E[d_i(z=0)]=0</math>,
so that <math>ITT_D = E[d_i(z=1)]=P[d_i(1)=1]</math>
If all subjects were assigned to treatment, the expected potential outcomes would be a weighted average of the treated potential outcomes among compliers, and the untreated potential outcomes among never-takers, such that
<math>E[Y_i(z=1)] = E[Y_i(z=1,d=1)|d_i(1)=1]*P[d_i(1)=1] + E[Y_i(z=1,d=0)|d_i(1)=0]*P[d_i(1)=0]</math>
If all subjects were assigned to control, however, the expected potential outcomes would be a weighted average of the untreated potential outcomes among compliers and never-takers, such that
<math>E[Y_i(z=0)] = E[Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1] + E[Y_i(z=0,d=0)|d_i(1)=0]*P[d_i(1)=0]</math>
Through substitution, the ITT is expressed as a weighted average of the ITT among the two subpopulations (compliers and never-takers), such that
<math>ITT = E[Y_i(z=1,d=1)-Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1] + E[Y_i(z=1,d=0)-Y_i(z=0,d=0)|d_i(1)=0]*P[d_i(1)=0]</math>
Given the exclusion and monotonicity assumption, the second half of this equation should be zero.
As such,
<math>\begin{align} \frac{ITT}{ITT_D} = {} & \frac{E[Y_i(z=1,d=1)-Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1]}{P[d_i(1)=1]} \\ = {} & E[Y_i(d=1)-Y_i(d=0)|d_i(1)=1] \\ = {} & LATE \end{align}</math>
Application: hypothetical schedule of the potential outcome under two-sided noncompliance
The table below lays out the hypothetical schedule of potential outcomes under two-sided noncompliance.
The ATE is calculated by the average of <math>Y_i(d=1)-Y_i(d=0)</math>, which here equals <math>\frac{3+2+4+3+6+6+4+4+3}{9} = \frac{35}{9} \approx 3.9</math>.
Observation | <math>Y_i(d=0)</math> | <math>Y_i(d=1)</math> | <math>Y_i(d=1)-Y_i(d=0)</math> | <math>d_i(z=0)</math> | <math>d_i(z=1)</math> | Type |
---|---|---|---|---|---|---|
1 | 4 | 7 | 3 | 0 | 1 | Complier |
2 | 3 | 5 | 2 | 0 | 0 | Never-taker |
3 | 1 | 5 | 4 | 0 | 1 | Complier |
4 | 5 | 8 | 3 | 1 | 1 | Always-taker |
5 | 4 | 10 | 6 | 0 | 1 | Complier |
6 | 2 | 8 | 6 | 0 | 0 | Never-taker |
7 | 6 | 10 | 4 | 0 | 1 | Complier |
8 | 5 | 9 | 4 | 0 | 1 | Complier |
9 | 2 | 5 | 3 | 1 | 1 | Always-taker |
LATE is calculated by ATE among compliers, so <math>LATE = \frac{3+4+6+4+4}{5} = \frac{21}{5} = 4.2</math>
ITT is calculated by the average of <math>Y_i(z=1)-Y_i(z=0)</math>,
so <math>ITT = \frac{3+0+4+0+6+0+4+4+0}{9} = \frac{21}{9} \approx 2.33</math>
<math>ITT_D = \frac{5}{9}</math> is the share of compliers, so that <math>LATE = \frac{ITT}{ITT_D} = \frac{21/9}{5/9} = \frac{21}{5} = 4.2</math>
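The arithmetic of this hypothetical schedule can be checked with a short script. Under the exclusion restriction, a never-taker's or always-taker's observed outcome is the same under either assignment, which is why the ITT is a scaled-down version of the LATE here.

```python
import numpy as np

# Columns of the table above: Y_i(d=0), Y_i(d=1), d_i(z=0), d_i(z=1) for nine observations.
y0 = np.array([4, 3, 1, 5, 4, 2, 6, 5, 2])
y1 = np.array([7, 5, 5, 8, 10, 8, 10, 9, 5])
d0 = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1])
d1 = np.array([1, 0, 1, 1, 1, 0, 1, 1, 1])

complier = (d0 == 0) & (d1 == 1)

ate = (y1 - y0).mean()             # average effect over all nine observations
late = (y1 - y0)[complier].mean()  # average effect among compliers only

# Under exclusion, the observed outcome depends only on the treatment received:
# Y_i(z) = Y_i(d=1) if d_i(z) = 1, else Y_i(d=0).
y_assigned_treatment = np.where(d1 == 1, y1, y0)
y_assigned_control = np.where(d0 == 1, y1, y0)
itt = (y_assigned_treatment - y_assigned_control).mean()
itt_d = d1.mean() - d0.mean()      # share of compliers

print(f"ATE = {ate:.2f}")                # 35/9 ≈ 3.89
print(f"LATE = {late:.2f}")              # 21/5 = 4.20
print(f"ITT/ITT_D = {itt / itt_d:.2f}")  # equals the LATE
```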
Others: LATE in instrumental variable framework
LATE can be thought of through an IV framework.[15] Treatment assignment <math>z_i</math> is the instrument that drives the causal effect on outcome <math>Y_i</math> through the variable of interest <math>d_i</math>, such that <math>z_i</math> only influences <math>Y_i</math> through the endogenous variable <math>d_i</math>, and through no other path. This would produce the treatment effect for compliers.
In addition to the potential outcomes framework mentioned above, LATE can also be estimated through the Structural Equation Modeling (SEM) framework, originally developed for econometric applications.
SEM is derived through the following equations:
<math>D_i = \alpha_0 + \alpha_1 Z_i + \xi_{1i}</math>
<math>Y_i = \beta_0 + \beta_1 Z_i + \xi_{2i}</math>
The first equation captures the first stage effect of <math>z_i</math> on <math>d_i</math>, adjusting for variance, where
<math>\alpha_1=Cov(D,Z)/Var(Z)</math>
The second equation captures the reduced form effect of <math>z_i</math> on <math>Y_i</math>,
<math>\beta_1=Cov(Y,Z)/Var(Z)</math>
The covariate adjusted IV estimator is the ratio <math>\tau_{LATE}=\frac{\beta_1}{\alpha_1}=\frac{Cov(Y,Z)/Var(Z)}{Cov(D,Z)/Var(Z)} = \frac{Cov(Y,Z)}{Cov(D,Z)}</math>
Similar to the nonzero compliance assumption, the coefficient <math>\alpha_1</math> in the first stage regression needs to be significant to make <math>z</math> a valid instrument.
However, because of SEM’s strict assumption of constant effect on every individual, the potential outcomes framework is in more prevalent use today.
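Whichever framework is used, with a single binary instrument the estimator reduces to the Wald ratio <math>Cov(Y,Z)/Cov(D,Z)</math> above. The simulation below is a minimal check of this equivalence; the complier share and effect size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.integers(0, 2, size=n)           # binary instrument: treatment assignment
complier = rng.random(n) < 0.6           # hypothetical 60% complier share
d = np.where(complier, z, 0)             # one-sided noncompliance: only compliers take up
y = 1.0 + 3.0 * d + rng.normal(size=n)   # treatment effect of 3 for treated units

var_z = np.var(z, ddof=1)
alpha_1 = np.cov(d, z)[0, 1] / var_z     # first-stage coefficient
beta_1 = np.cov(y, z)[0, 1] / var_z      # reduced-form coefficient

tau_late = beta_1 / alpha_1              # Wald ratio = Cov(Y,Z) / Cov(D,Z)
print(f"first stage = {alpha_1:.3f}, reduced form = {beta_1:.3f}, LATE = {tau_late:.3f}")
```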
Generalizing LATE
The primary goal of running an experiment is to obtain causal leverage, and it does so by randomly assigning subjects to experimental conditions, which sets it apart from observational studies. In an experiment with perfect compliance, the average treatment effect can be obtained. However, many experiments are likely to experience either one-sided or two-sided non-compliance. In the presence of non-compliance, the ATE can no longer be recovered. Instead, what is recovered is the average treatment effect for a certain subpopulation known as the compliers, which is the LATE.
When treatment effects may be heterogeneous across groups, the LATE is unlikely to be equivalent to the ATE. In one example, Angrist (1989)[16] attempts to estimate the causal effect of serving in the military on earnings, using the draft lottery as an instrument. The compliers are those who were induced by the draft lottery to serve in the military. If the research interest lies in how to compensate those involuntarily taxed by the draft, the LATE would be useful, since the research targets compliers. However, if researchers are concerned with a more universal draft in the future, then the ATE would be more important (Imbens 2009).[1]
Generalizing from the LATE to the ATE thus becomes an important issue when the research interest lies with the causal treatment effect on a broader population, not just the compliers. In these cases, the LATE may not be the parameter of interest, and researchers have questioned its utility.[17][18] Other researchers, however, have countered this criticism by proposing new methods to generalize from the LATE to the ATE.[19][20][21] Most of these involve some form of reweighting from the LATE, under certain key assumptions that allow for extrapolation from the compliers.
Reweighting
The intuition behind reweighting comes from the notion that within a given stratum, the distribution among the compliers may not reflect the distribution of the broader population. Thus, to retrieve the ATE, it is necessary to reweight based on the information gleaned from compliers. There are a number of ways that reweighting can be used to obtain the ATE from the LATE.
Reweighting by ignorability assumption
By leveraging instrumental variables, Aronow and Carnegie (2013)[19] propose a new reweighting method called Inverse Compliance Score weighting (ICSW), with an intuition similar to that behind IPW. This method assumes that compliance propensity is a pre-treatment covariate and that compliers would have the same average treatment effect within their strata. ICSW first estimates the conditional probability of being a complier (the compliance score) for each subject by maximum likelihood given control covariates, then reweights each unit by the inverse of its compliance score, so that the compliers' covariate distribution matches that of the full population. ICSW is applicable in both the one-sided and two-sided noncompliance situations.
Although one's compliance score cannot be directly observed, the probability of compliance can be estimated by observing the compliance condition from the same stratum, in other words, those that share the same covariate profile. The compliance score is treated as a latent pretreatment covariate, which is independent of treatment assignment <math>Z</math>. For each unit <math>i</math>, the compliance score is denoted as <math>P_{Ci}=Pr(D_1>D_0|X=x_i)</math>, where <math>x_i</math> is the covariate vector for unit <math>i</math>.
In the one-sided noncompliance case, the population consists of only compliers and never-takers. All units assigned to the treatment group that take the treatment will be compliers. Thus, a simple bivariate regression of D on X can predict the probability of compliance.
In the two-sided noncompliance case, the compliance score is estimated using maximum likelihood estimation.
Assuming a probit model for compliance and a Bernoulli distribution for <math>D</math>, the compliance score is modeled as <math>P_{Ci}=\Phi(x_i'\theta_C)</math>, where <math>\theta_C</math> is a vector of coefficients to be estimated and <math>\Phi</math> is the cumulative distribution function of the probit model.
- ICSW estimator
By the LATE theorem,[1] the average treatment effect for compliers can be estimated with the equation:
<math>\widehat{LATE}=\frac{\hat{E}[Y_i|Z_i=1]-\hat{E}[Y_i|Z_i=0]}{\hat{E}[D_i|Z_i=1]-\hat{E}[D_i|Z_i=0]}</math>
The ICSW estimator takes the same ratio with each unit weighted by the inverse of its estimated compliance score, <math>w_i=1/\hat{P}_{Ci}</math>:
<math>\hat{\tau}_{ICSW}=\frac{\hat{E}_w[Y_i|Z_i=1]-\hat{E}_w[Y_i|Z_i=0]}{\hat{E}_w[D_i|Z_i=1]-\hat{E}_w[D_i|Z_i=0]}</math>
where <math>\hat{E}_w</math> denotes a weighted sample mean.
This estimator is equivalent to using a 2SLS estimator with weight <math>w_i=1/\hat{P}_{Ci}</math>.
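A rough sketch of the ICSW idea under one-sided noncompliance is given below. It substitutes a linear probability model for the probit specification and uses a hypothetical data-generating process, so it illustrates the reweighting logic rather than reproducing Aronow and Carnegie's estimator exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300_000

# Hypothetical process with treatment-effect heterogeneity: compliance and the
# treatment effect both increase in a covariate x, so the LATE differs from the ATE.
x = rng.random(n)
p_c = 0.2 + 0.6 * x                      # true compliance probability given x
complier = rng.random(n) < p_c
z = rng.integers(0, 2, size=n)           # random assignment (the instrument)
d = np.where(complier, z, 0)             # one-sided noncompliance
tau = 1.0 + 4.0 * x                      # unit-level treatment effect
y = 2.0 + tau * d + rng.normal(size=n)

def wald(y, d, z, w):
    """Weighted Wald (IV) estimator."""
    def wmean(v, mask):
        return np.average(v[mask], weights=w[mask])
    return (wmean(y, z == 1) - wmean(y, z == 0)) / (wmean(d, z == 1) - wmean(d, z == 0))

# Unweighted IV estimate: the LATE (effect among compliers only).
late_hat = wald(y, d, z, np.ones(n))

# Compliance score via a linear probability model of D on X within the assigned-to-
# treatment group (a probit would typically be used; this keeps the sketch numpy-only).
slope, intercept = np.polyfit(x[z == 1], d[z == 1], 1)
p_hat = np.clip(intercept + slope * x, 0.01, 1.0)  # floor avoids exploding weights

# ICSW: rerun the IV estimator with each unit weighted by 1 / (compliance score).
icsw_hat = wald(y, d, z, 1.0 / p_hat)

print(f"LATE estimate = {late_hat:.2f} (true complier effect 3.4)")
print(f"ICSW estimate = {icsw_hat:.2f} (true ATE 3.0)")
```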
- Core assumptions under reweighting
An essential assumption of ICSW is treatment effect homogeneity within strata, which means the treatment effect should on average be the same for everyone in the stratum, not just for the compliers. If this assumption holds, the LATE is equal to the ATE within each covariate profile, such that
<math>E[Y_i(1)-Y_i(0)|X=x_i, D_1>D_0]=E[Y_i(1)-Y_i(0)|X=x_i]</math>
Notice that this is a less restrictive assumption than the traditional ignorability assumption, as it only concerns the covariates that are relevant to the compliance score, and hence to treatment effect heterogeneity, rather than all sets of covariates.
The second assumption is consistency of <math>\hat{P}_{Ci}</math> for <math>P_{Ci}</math>, and the third assumption is nonzero compliance for each stratum, which is an extension of the IV assumption of nonzero compliance over the population. This is a reasonable assumption because, if the compliance score were zero for a certain stratum, its inverse would be infinite.
The ICSW estimator is more sensitive than the IV estimator: because it incorporates more covariate information, the estimator might have higher variance. This is a general problem for IPW-style estimation. The problem is exacerbated when there is only a small population in certain strata and the compliance rate is low. One way to mitigate this is to winsorize the estimates; in the paper the threshold is set at <math>\hat{P}_{Ci}=0.275</math>, so any compliance score lower than 0.275 is replaced by this value. Bootstrapping the entire procedure is also recommended to account for uncertainty (Abadie 2002).[22]
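The winsorizing and bootstrap steps can be sketched as small helpers; the function names and the dictionary-of-arrays interface are illustrative conveniences, with the 0.275 floor echoing the threshold mentioned above.

```python
import numpy as np

def winsorize_scores(p_hat, floor=0.275):
    """Replace estimated compliance scores below the floor with the floor value."""
    return np.maximum(np.asarray(p_hat, dtype=float), floor)

def bootstrap_ci(estimator, data, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for an estimator applied to resampled data.

    `data` is a dict of equal-length arrays (e.g. z, d, y, x); `estimator`
    maps such a dict to a scalar estimate (e.g. an ICSW estimate).
    """
    rng = np.random.default_rng(seed)
    n = len(next(iter(data.values())))
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        draws.append(estimator({k: np.asarray(v)[idx] for k, v in data.items()}))
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])
```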
Reweighting under monotonicity assumption
[edit]This section needs additional citations for verification. (December 2018) |
In another approach, one might assume that an underlying utility model links the never-takers, compliers, and always-takers. The ATE can be estimated by reweighting based on an extrapolation of the complier treated and untreated potential outcomes to the never-takers and always-takers. The following method is one that has been proposed by Amanda Kowalski.[21]
First, all subjects are assumed to have a utility function, determined by their individual gains from treatment and costs from treatment. Based on an underlying assumption of monotonicity, the never-takers, compliers, and always-takers can be arranged on the same continuum based on their utility function. This assumes that the always-takers have such a high utility from taking the treatment that they will take it even without encouragement. On the other hand, the never-takers have such a low utility function that they will not take the treatment despite encouragement. Thus, the never-takers can be aligned with the compliers with the lowest utilities, and the always-takers with the compliers with the highest utility functions.
In an experimental population, several aspects can be observed: the treated potential outcomes of the always-takers (those who are treated in the control group); the untreated potential outcomes of the never-takers (those who remain untreated in the treatment group); the treated potential outcomes of the always-takers and compliers (those who are treated in the treatment group); and the untreated potential outcomes of the compliers and never-takers (those who are untreated in the control group). However, the treated and untreated potential outcomes of the compliers should be extracted from the latter two observations. To do so, the LATE must be extracted from the treated population.
Assuming no defiers, the treated group in the treatment condition consists of both always-takers and compliers. From the observations of the treated outcomes in the control group, the average treated outcome for always-takers can be extracted, as well as their share of the overall population. As such, the weighted average can be undone and the treated potential outcome for the compliers can be obtained; then, the LATE is subtracted to get the untreated potential outcomes for the compliers. This then allows extrapolation from the compliers to obtain the ATE.
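The step of undoing the weighted average can be written as a short calculation. The shares and group means below are hypothetical placeholders rather than values from Kowalski's paper.

```python
# Hypothetical summary statistics from an experiment with two-sided noncompliance.
share_always_taker = 0.2           # treated share observed in the control group
share_never_taker = 0.3            # untreated share observed in the treatment group
share_complier = 1.0 - share_always_taker - share_never_taker

mean_treated_in_treatment = 6.0    # treated units in the treatment group (always-takers + compliers)
mean_treated_always_takers = 4.0   # treated units in the control group (always-takers only)
late = 2.5                         # complier treated mean minus complier untreated mean

# Undo the weighted average:
# mean_treated_in_treatment = (pA * mean_A + pC * mean_C_treated) / (pA + pC)
p_a, p_c = share_always_taker, share_complier
mean_complier_treated = (mean_treated_in_treatment * (p_a + p_c)
                         - mean_treated_always_takers * p_a) / p_c

# Subtract the LATE to get the compliers' untreated potential outcome.
mean_complier_untreated = mean_complier_treated - late

print(f"complier treated mean   = {mean_complier_treated:.2f}")    # 6.80
print(f"complier untreated mean = {mean_complier_untreated:.2f}")  # 4.30
```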
Returning to the weak monotonicity assumption, which assumes that the utility function always runs in one direction, the utility of a marginal complier would be similar to the utility of a never-taker on one end, and that of an always-taker on the other end. The always-takers will have the same untreated potential outcomes as the compliers with the highest utility, i.e., the maximum untreated potential outcome among the compliers. Again, this is based on the underlying utility model linking the subgroups, which assumes that the utility function of an always-taker would not be lower than the utility function of a complier. The same logic applies to the never-takers, who are assumed to have a utility function that will always be lower than that of a complier.
Given this, extrapolation is possible by projecting the untreated potential outcomes of the compliers to the always-takers, and the treated potential outcomes of the compliers to the never-takers. In other words, if it is assumed that the untreated compliers are informative about always-takers, and the treated compliers are informative about never-takers, then comparison is now possible among the treated always-takers to their “as-if” untreated always-takers, and the untreated never-takers can be compared to their “as-if” treated counterparts. This will then allow the calculation of the overall treatment effect. Extrapolation under the weak monotonicity assumption will provide a bound, rather than a point-estimate.
Limitations
The estimation of the extrapolation to the ATE from the LATE requires certain key assumptions, which may vary from one approach to another. While some may assume homogeneity within covariate strata, and thus extrapolate based on strata,[19] others may instead assume monotonicity.[21] All will assume the absence of defiers within the experimental population. Some of these assumptions may be weaker than others; for example, the monotonicity assumption is weaker than the ignorability assumption. However, there are other trade-offs to consider, such as whether the estimates produced are point estimates or bounds. Ultimately, the literature on generalizing the LATE relies entirely on key assumptions: it is not a design-based approach per se, and the field of experiments is not usually in the habit of comparing groups unless they are randomly assigned. Even when the assumptions are difficult to verify, researchers can incorporate checks through the design of the experiment. For example, in a typical field experiment where the instrument is "encouragement to treatment", treatment effect heterogeneity could be detected by varying the intensity of encouragement. If the compliance rate remains stable under different intensities, it could be a signal of homogeneity across groups.
References
- ^ a b c d Imbens, Guido W.; Angrist, Joshua D. (March 1994). "Identification and Estimation of Local Average Treatment Effects" (PDF). Econometrica. 62 (2): 467. doi:10.2307/2951620. ISSN 0012-9682. JSTOR 2951620.
- ^ a b The Committee for the Prize in Economic Sciences in Memory of Alfred Nobel (2021-10-11). "Answering causal questions using observational data. Scientific Background on the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2021" (PDF).
- ^ Moerbeek, M., & Schie, S. van. (2019). What are the statistical implications of treatment non‐compliance in cluster randomized trials: A simulation study. In Statistics in Medicine (Vol. 38, Issue 26, pp. 5071–5084). Wiley. https://doi.org/10.1002/sim.8351
- ^ Baker, Stuart G.; Lindeman, Karen S. (1994-11-15). "The paired availability design: A proposal for evaluating epidural analgesia during labor". Statistics in Medicine. 13 (21): 2269–2278. doi:10.1002/sim.4780132108. ISSN 0277-6715. PMID 7846425.
- ^ a b Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S. (2018-10-30). ""Latent class instrumental variables: A clinical and biostatistical perspective"". Statistics in Medicine. 38 (5): 901. doi:10.1002/sim.6612. ISSN 0277-6715. PMC 4715605. PMID 30761594.
- ^ Swanson, Sonja A.; Hernán, Miguel A.; Miller, Matthew; Robins, James M.; Richardson, Thomas S. (2018-04-03). "Partial Identification of the Average Treatment Effect Using Instrumental Variables: Review of Methods for Binary Instruments, Treatments, and Outcomes". Journal of the American Statistical Association. 113 (522): 933–947. doi:10.1080/01621459.2018.1434530. ISSN 0162-1459. PMC 6752717. PMID 31537952.
- ^ Lee, Kwonsang; Lorch, Scott A.; Small, Dylan S. (2019-02-20). "Sensitivity analyses for average treatment effects when outcome is censored by death in instrumental variable models". Statistics in Medicine. 38 (13): 2303–2316. arXiv:1802.06711. doi:10.1002/sim.8117. ISSN 0277-6715. PMID 30785641. S2CID 73458979.
- ^ Sheng, E (2019). "Estimating causal effects of treatment in RCTs with provider and subject noncompliance". Statistics in Medicine. 38 (5): 738–750. doi:10.1002/sim.8012. PMID 30347462. S2CID 53035814.
- ^ Wang, L (2016). "Bounded, efficient and multiply robust estimation of average treatment effects using instrumental variable". arXiv:1611.09925v4.
- ^ Bloom, Howard S. (April 1984). "Accounting for No-Shows in Experimental Evaluation Designs". Evaluation Review. 8 (2): 225–246. doi:10.1177/0193841X8400800205. ISSN 0193-841X.
- ^ Baker, Stuart G.; Lindeman, Karen S. (2024-04-02). "Multiple Discoveries in Causal Inference: LATE for the Party". CHANCE. 37 (2): 21–25. doi:10.1080/09332480.2024.2348956. ISSN 0933-2480. PMC 11218811. PMID 38957370.
- ^ Rubin, Donald B. (January 1978). "Bayesian Inference for Causal Effects: The Role of Randomization". The Annals of Statistics. 6 (1): 34–58. doi:10.1214/aos/1176344064. ISSN 0090-5364.
- ^ a b Angrist, Joshua D.; Imbens, Guido W.; Rubin, Donald B. (June 1996). "Identification of Causal Effects Using Instrumental Variables" (PDF). Journal of the American Statistical Association. 91 (434): 444–455. doi:10.1080/01621459.1996.10476902. ISSN 0162-1459.
- ^ Imbens, G. W.; Rubin, D. B. (1997-10-01). "Estimating Outcome Distributions for Compliers in Instrumental Variables Models". The Review of Economic Studies. 64 (4): 555–574. doi:10.2307/2971731. ISSN 0034-6527. JSTOR 2971731.
- ^ Hanck, Christoph (2009-10-24). "Joshua D. Angrist and Jörn-Steffen Pischke (2009): Mostly Harmless Econometrics: An Empiricist's Companion". Statistical Papers. 52 (2): 503–504. doi:10.1007/s00362-009-0284-y. ISSN 0932-5026.
- ^ Angrist, Joshua (September 1990). "The Draft Lottery and Voluntary Enlistment in the Vietnam Era". NBER Working Paper No. 3514. Cambridge, MA. doi:10.3386/w3514.
- ^ Deaton, Angus (January 2009). "Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development". NBER Working Paper No. 14690. Cambridge, MA. doi:10.3386/w14690.
- ^ Heckman, James J.; Urzúa, Sergio (May 2010). "Comparing IV with structural models: What simple IV can and cannot identify". Journal of Econometrics. 156 (1): 27–37. doi:10.1016/j.jeconom.2009.09.006. ISSN 0304-4076. PMC 2861784. PMID 20440375.
- ^ a b c Aronow, Peter M.; Carnegie, Allison (2013). "Beyond LATE: Estimation of the Average Treatment Effect with an Instrumental Variable". Political Analysis. 21 (4): 492–506. doi:10.1093/pan/mpt013. ISSN 1047-1987.
- ^ Imbens, Guido W (June 2010). "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)" (PDF). Journal of Economic Literature. 48 (2): 399–423. doi:10.1257/jel.48.2.399. ISSN 0022-0515. S2CID 14375060.
- ^ a b c Kowalski, Amanda (2016). "Doing More When You're Running LATE: Applying Marginal Treatment Effect Methods to Examine Treatment Effect Heterogeneity in Experiments". NBER Working Paper No. 22363. doi:10.3386/w22363.
- ^ Abadie, Alberto (March 2002). "Bootstrap Tests for Distributional Treatment Effects in Instrumental Variable Models". Journal of the American Statistical Association. 97 (457): 284–292. CiteSeerX 10.1.1.337.3129. doi:10.1198/016214502753479419. ISSN 0162-1459. S2CID 18983937.
Further reading
- Angrist, Joshua D.; Fernández-Val, Iván (2013). Advances in Economics and Econometrics. Cambridge University Press. pp. 401–434. doi:10.1017/cbo9781139060035.012. ISBN 9781139060035.