
Causal Inference 2 -- Deep Models (Personal Notes)

Contents

I. Method Overview

1.1 TarNet

1.1.1 TarNet

1.1.2 Network architecture

1.1.3 Counterfactual loss

1.1.4 Code implementation

1.2 Dragonnet

1.3 DRNet

1.4 VCNet

VCNET AND FUNCTIONAL TARGETED REGULARIZATION FOR LEARNING CAUSAL EFFECTS OF CONTINUOUS TREATMENTS

II. To Be Added


I. Method Overview

1.1 TarNet

1.1.1 TarNet

        Neither the S-Learner nor the T-Learner works well here. The S-Learner trains T together with the covariates W in a single model, so in high dimensions it can easily wash out the influence of T (too much bias). The T-Learner fits two independent models on the T=0 and T=1 subsets, and because each model sees only part of the data, the variance is inflated (too much variance) in a way that has little to do with the causal effect. TARNet trains on all of the data through a shared representation, then splits into two branches, one outcome head per treatment.

Essence: multi-task learning

One model branch per treatment

1.1.2 Network architecture

We focus on the setting where the causal structure is simple and known: strong ignorability holds, (Y1, Y0) ⊥ t | x, and there are no hidden confounders. Under this assumed causal model, the most common target in the applied sciences that use causal inference is the average treatment effect.
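Under these assumptions, the two estimands can be written explicitly (standard definitions, in the notation of Shalit et al., 2017):

```latex
\mathrm{ITE:}\quad \tau(x) = \mathbb{E}\left[\,Y_1 - Y_0 \mid X = x\,\right],
\qquad
\mathrm{ATE:}\quad \tau = \mathbb{E}\left[\,Y_1 - Y_0\,\right] = \mathbb{E}_{x}\left[\tau(x)\right].
```

Strong ignorability is exactly what licenses replacing the unobservable potential outcomes with conditional expectations of the observed outcome given x and t.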

One takeaway of the paper's results is a previously unaccounted-for source of variance that arises when covariate adjustment is used to estimate the ITE. The authors propose a new form of regularization: learn a representation that shrinks the IPM distance between the treated and control distributions, which yields a new kind of bias-variance trade-off.

Observational training samples may be imbalanced with respect to t (t and x are not independent). For example: we want to know the effect of a drug on some disease. Because the drug is expensive, most of the sampled people who take it are rich and most of those who do not are poor. If wealth is a feature in X and taking the drug is t, then the fitted h1 and h0 become unreliable; in particular, h1 is inaccurate for the poor and h0 is inaccurate for the rich. They would only be reliable if the samples came from a randomized trial, but we only have observational data and cannot run the experiment, so we have to adjust the sample distribution instead.
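One way to quantify this imbalance is an IPM distance between the treated and control representations. A minimal NumPy sketch of the simplest case, a linear-kernel (mean-embedding) MMD; the function name and data are illustrative, not from the papers' code:

```python
import numpy as np

def linear_mmd(phi_treated, phi_control):
    """Squared MMD with a linear kernel: ||mean(phi_t) - mean(phi_c)||^2."""
    gap = phi_treated.mean(axis=0) - phi_control.mean(axis=0)
    return float(np.sum(gap ** 2))

rng = np.random.default_rng(0)
phi_t = rng.normal(loc=1.0, size=(500, 4))  # treated representations, shifted mean
phi_c = rng.normal(loc=0.0, size=(500, 4))  # control representations

print(linear_mmd(phi_t, phi_c))  # clearly positive: the groups differ
print(linear_mmd(phi_c, phi_c))  # exactly 0 for identical samples
```

CFR adds a penalty of exactly this shape (with stronger IPMs such as Wasserstein or Gaussian-kernel MMD) to push the two groups' representations together.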


Another family of statistical methods widely used in causal inference is weighting. Methods such as inverse propensity score weighting (Austin, 2011) reweight the units in the observational data so that the treated and control populations become more comparable, and they have been used to estimate conditional effects (Cole et al., 2003). The main challenge, especially in high dimensions, is controlling the variance of the estimate (Swaminathan & Joachims, 2015). Doubly robust methods go one step further, combining propensity score reweighting with covariate adjustment.
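As a concrete illustration of inverse propensity weighting (a textbook estimator, not code from the papers; the propensity is assumed known here, which sidesteps the variance issue the paragraph mentions):

```python
import numpy as np

def ipw_ate(y, t, propensity):
    """Horvitz-Thompson style IPW estimate of the ATE."""
    return float(np.mean(t * y / propensity - (1 - t) * y / (1 - propensity)))

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))             # treatment more likely for large x: confounding
t = rng.binomial(1, p)
y = 2.0 * t + x + rng.normal(size=n)     # true ATE = 2

print(ipw_ate(y, t, p))                  # close to 2 (reweighting removes the bias)
naive = float(y[t == 1].mean() - y[t == 0].mean())
print(naive)                             # biased upward, since treated units have larger x
```

With estimated propensities near 0 or 1, the weights 1/p and 1/(1-p) explode, which is exactly the high-dimensional variance problem cited above.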

The paper's work differs from all of the above in that it focuses on the generalization error of individual treatment effect estimation rather than on asymptotic consistency, and it considers only the observational case, with no randomized component and no instrumental variables.

In particular, estimating the ITE requires predicting outcomes under a distribution different from the observed one. The ITE error bound is related to the domain-adaptation generalization bounds of Ben-David et al. (2007), Mansour et al. (2009), Ben-David et al. (2010), and Cortes & Mohri (2014). Those bounds use distribution distance measures such as the A-distance or the discrepancy metric, which are related to the IPM distances used here. The algorithm is similar to the domain-adaptation algorithm recently proposed by Ganin et al. (2016), and in principle other domain-adaptation methods (e.g. Daumé III (2007); Pan et al. (2011); Sun et al. (2016)) could also be adapted to the ITE estimation problem posed here.

In this paper the authors focus on a related but more substantial task: estimating individual treatment effects, building on a counterfactual error term. They give bounds that speak to the absolute quality of a representation, and derive a more flexible family of algorithms, including nonlinear hypotheses and more powerful IPM-type distribution metrics such as the Wasserstein and MMD distances. Finally, they run more thorough experiments, including real-world datasets and out-of-sample evaluation, and show that their methods outperform previously proposed approaches.

1.1.3 Counterfactual loss

The weight w_i in the first term corrects for unequal treatment and control group sizes, so that the two groups carry balanced weight in the loss.

The second term is a regularization penalty on model complexity.

The third term is the correction for representation imbalance discussed above, with α controlling the strength of the correction. When α > 0 the model is called CFR (Counterfactual Regression); when α = 0 it is called TARNet (Treatment-Agnostic Representation Network). In the paper's experiments CFR performs somewhat better.
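Putting the three terms together, the CFR objective from Shalit et al. (2017) reads (Φ is the shared representation, h the outcome hypothesis, u the fraction of treated units):

```latex
\min_{h,\,\Phi}\;
\frac{1}{n}\sum_{i=1}^{n} w_i\, L\!\big(h(\Phi(x_i), t_i),\, y_i\big)
\;+\; \lambda\,\mathfrak{R}(h)
\;+\; \alpha\,\mathrm{IPM}_G\!\big(\{\Phi(x_i)\}_{i:\,t_i=0},\; \{\Phi(x_i)\}_{i:\,t_i=1}\big),
\qquad
w_i = \frac{t_i}{2u} + \frac{1-t_i}{2(1-u)},
\quad u = \frac{1}{n}\sum_{i} t_i .
```

Setting α = 0 drops the IPM term and recovers TARNet exactly.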
 

1.1.4 Code implementation

import tensorflow as tf
from ..layers.gather import Gather_Streams
from ..layers.split import Split_Streams
from ..layers.MLP import MLP
from pickle import load
from typing import Any


class TARNet(tf.keras.Model):
    """A TARNet model built on the Keras subclassing API."""

    def __init__(
        self,
        normalizer_layer: tf.keras.layers.Layer = None,
        n_treatments: int = 2,
        output_dim: int = 1,
        phi_layers: int = 2,
        units: int = 20,
        y_layers: int = 3,
        activation: str = "relu",
        reg_l2: float = 0.0,
        treatment_as_input: bool = False,
        scaler: Any = None,
        output_bias: float = None,
    ):
        """Initialize the layers used by the model.

        Args:
            normalizer_layer (tf.keras.layers.Layer, optional): input normalization layer. Defaults to None.
            n_treatments (int, optional): number of treatment arms (one outcome head each). Defaults to 2.
            output_dim (int, optional): dimension of each outcome head. Defaults to 1.
            phi_layers (int, optional): depth of the shared representation MLP. Defaults to 2.
            y_layers (int, optional): depth of each outcome head. Defaults to 3.
            activation (str, optional): activation function. Defaults to "relu".
            reg_l2 (float, optional): L2 regularization strength. Defaults to 0.0.
        """
        super(TARNet, self).__init__()
        # uniform quantile transform for the treatment
        self.scaler = scaler if scaler else load(open("scaler.pkl", "rb"))
        # input normalization layer
        self.normalizer_layer = normalizer_layer
        # shared representation Phi(x)
        self.phi = MLP(
            units=units,
            activation=activation,
            name="phi",
            num_layers=phi_layers,
        )
        self.splitter = Split_Streams()
        # one hidden MLP per treatment arm
        self.y_hiddens = [
            MLP(
                units=units,
                activation=activation,
                name=f"y_{k}",
                num_layers=y_layers,
            )
            for k in range(n_treatments)
        ]
        # linear/sigmoid top layer per arm to cover the normalized output
        self.y_outputs = [
            tf.keras.layers.Dense(
                output_dim,
                activation="sigmoid",
                bias_initializer=output_bias,
                name=f"top_{k}",
            )
            for k in range(n_treatments)
        ]
        self.n_treatments = n_treatments
        self.output_ = Gather_Streams()

    def call(self, x):
        cofeatures_input, treatment_input = x
        treatment_cat = tf.cast(treatment_input, tf.int32)
        if self.normalizer_layer:
            cofeatures_input = self.normalizer_layer(cofeatures_input)
        x_flux = self.phi(cofeatures_input)
        # route each sample to the branch matching its treatment
        streams = [
            self.splitter([x_flux, treatment_cat, tf.cast(indice_treatment, tf.int32)])
            for indice_treatment in range(len(self.y_hiddens))
        ]
        # streams is a list of tuples (gathered rows, index positions); unpack them
        x_streams, indice_streams = zip(*streams)
        x_streams = [
            y_hidden(x_stream) for y_hidden, x_stream in zip(self.y_hiddens, x_streams)
        ]
        x_streams = [
            y_output(x_stream) for y_output, x_stream in zip(self.y_outputs, x_streams)
        ]
        # scatter per-branch predictions back to the original sample order
        return self.output_([x_streams, indice_streams])
import tensorflow as tf


class Split_Streams(tf.keras.layers.Layer):
    def __init__(self):
        super(Split_Streams, self).__init__()

    def call(self, inputs):
        x, y, z = inputs
        # positions of the samples whose treatment y equals the stream index z
        indice_position = tf.reshape(
            tf.cast(tf.where(tf.equal(tf.reshape(y, (-1,)), z)), tf.int32),
            (-1,),
        )
        return tf.gather(x, indice_position), indice_position
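The Split_Streams/Gather_Streams pair routes each sample to the head that matches its treatment, then scatters the per-head predictions back into the original row order. The same index logic rendered in plain NumPy (illustrative only; the "heads" are stand-in functions):

```python
import numpy as np

phi = np.arange(10).reshape(5, 2)   # 5 samples of a 2-d representation
t = np.array([0, 1, 1, 0, 1])       # treatment assignment per sample

streams = {}
for k in (0, 1):
    idx = np.flatnonzero(t == k)    # Split_Streams: tf.where(t == k)
    streams[k] = (phi[idx], idx)    # tf.gather(phi, idx)

# pretend head 0 negates its inputs and head 1 doubles them
out0 = -streams[0][0]
out1 = 2 * streams[1][0]

# Gather_Streams: scatter head outputs back to the original order
merged = np.empty_like(phi)
merged[streams[0][1]] = out0
merged[streams[1][1]] = out1
print(merged)
```

Each row of `merged` is the prediction of exactly one head, chosen by that sample's observed treatment.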

pd.get_dummies is equivalent to one-hot encoding; it is commonly used to turn discrete categorical values into one-hot form.
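For instance, encoding a treatment column with illustrative arm names:

```python
import pandas as pd

t = pd.Series(["control", "drug_a", "drug_b", "drug_a"], name="treatment")
onehot = pd.get_dummies(t)  # one 0/1 column per treatment arm
print(onehot)
```

Each row has exactly one 1, in the column of its treatment arm.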

1.2 Dragonnet

One outcome head per treatment, plus a third head that predicts the propensity score

import tensorflow as tf
import keras.backend as K
from keras.engine.topology import Layer  # old Keras API, as in the original repo
from keras.metrics import binary_accuracy
from keras.layers import Input, Dense, Concatenate, BatchNormalization, Dropout
from keras.models import Model
from keras import regularizers


def binary_classification_loss(concat_true, concat_pred):
    # cross-entropy of the propensity head against the observed treatment
    t_true = concat_true[:, 1]
    t_pred = concat_pred[:, 2]
    t_pred = (t_pred + 0.001) / 1.002  # squeeze away from 0/1
    losst = tf.reduce_sum(K.binary_crossentropy(t_true, t_pred))
    return losst


def regression_loss(concat_true, concat_pred):
    # squared error of each outcome head, evaluated only on its own factual samples
    y_true = concat_true[:, 0]
    t_true = concat_true[:, 1]
    y0_pred = concat_pred[:, 0]
    y1_pred = concat_pred[:, 1]
    loss0 = tf.reduce_sum((1. - t_true) * tf.square(y_true - y0_pred))
    loss1 = tf.reduce_sum(t_true * tf.square(y_true - y1_pred))
    return loss0 + loss1


def ned_loss(concat_true, concat_pred):
    t_true = concat_true[:, 1]
    t_pred = concat_pred[:, 1]
    return tf.reduce_sum(K.binary_crossentropy(t_true, t_pred))


def dead_loss(concat_true, concat_pred):
    return regression_loss(concat_true, concat_pred)


def dragonnet_loss_binarycross(concat_true, concat_pred):
    return regression_loss(concat_true, concat_pred) + binary_classification_loss(concat_true, concat_pred)


def treatment_accuracy(concat_true, concat_pred):
    t_true = concat_true[:, 1]
    t_pred = concat_pred[:, 2]
    return binary_accuracy(t_true, t_pred)


def track_epsilon(concat_true, concat_pred):
    epsilons = concat_pred[:, 3]
    return tf.abs(tf.reduce_mean(epsilons))


class EpsilonLayer(Layer):
    def __init__(self):
        super(EpsilonLayer, self).__init__()

    def build(self, input_shape):
        # Create a trainable scalar weight for this layer.
        self.epsilon = self.add_weight(name='epsilon',
                                       shape=[1, 1],
                                       initializer='RandomNormal',
                                       trainable=True)
        super(EpsilonLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, inputs, **kwargs):
        # broadcast the scalar epsilon to one value per sample
        return self.epsilon * tf.ones_like(inputs)[:, 0:1]


def make_tarreg_loss(ratio=1., dragonnet_loss=dragonnet_loss_binarycross):
    def tarreg_ATE_unbounded_domain_loss(concat_true, concat_pred):
        vanilla_loss = dragonnet_loss(concat_true, concat_pred)
        y_true = concat_true[:, 0]
        t_true = concat_true[:, 1]
        y0_pred = concat_pred[:, 0]
        y1_pred = concat_pred[:, 1]
        t_pred = concat_pred[:, 2]
        epsilons = concat_pred[:, 3]
        t_pred = (t_pred + 0.01) / 1.02
        # alternative: t_pred = tf.clip_by_value(t_pred, 0.01, 0.99, name='t_pred')
        y_pred = t_true * y1_pred + (1 - t_true) * y0_pred
        h = t_true / t_pred - (1 - t_true) / (1 - t_pred)
        y_pert = y_pred + epsilons * h
        targeted_regularization = tf.reduce_sum(tf.square(y_true - y_pert))
        # final
        loss = vanilla_loss + ratio * targeted_regularization
        return loss
    return tarreg_ATE_unbounded_domain_loss


def make_dragonnet(input_dim, reg_l2):
    """
    Neural net predictive model. The dragon has three heads.
    :param input_dim:
    :param reg_l2:
    :return:
    """
    t_l1 = 0.
    t_l2 = reg_l2
    inputs = Input(shape=(input_dim,), name='input')
    # representation
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(inputs)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(x)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(x)
    # propensity head, fed from the shared representation
    t_predictions = Dense(units=1, activation='sigmoid')(x)
    # hypothesis heads
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(x)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(x)
    # second layer
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(y0_hidden)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(y1_hidden)
    # third layer
    y0_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y0_predictions')(y0_hidden)
    y1_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y1_predictions')(y1_hidden)
    dl = EpsilonLayer()
    epsilons = dl(t_predictions, name='epsilon')
    concat_pred = Concatenate(1)([y0_predictions, y1_predictions, t_predictions, epsilons])
    model = Model(inputs=inputs, outputs=concat_pred)
    return model


def make_tarnet(input_dim, reg_l2):
    """
    Neural net predictive model. The dragon has three heads.
    :param input_dim:
    :param reg_l2:
    :return:
    """
    inputs = Input(shape=(input_dim,), name='input')
    # representation
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(inputs)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(x)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal')(x)
    # note: unlike make_dragonnet, the propensity head is fed the raw inputs
    t_predictions = Dense(units=1, activation='sigmoid')(inputs)
    # hypothesis heads
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(x)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(x)
    # second layer
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(y0_hidden)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2))(y1_hidden)
    # third layer
    y0_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y0_predictions')(y0_hidden)
    y1_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y1_predictions')(y1_hidden)
    dl = EpsilonLayer()
    epsilons = dl(t_predictions, name='epsilon')
    concat_pred = Concatenate(1)([y0_predictions, y1_predictions, t_predictions, epsilons])
    model = Model(inputs=inputs, outputs=concat_pred)
    return model


def make_ned(input_dim, reg_l2=0.01):
    """
    Neural net predictive model. The dragon has three heads.
    :param input_dim:
    :param reg_l2:
    :return:
    """
    inputs = Input(shape=(input_dim,), name='input')
    # representation
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal', name='ned_hidden1')(inputs)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal', name='ned_hidden2')(x)
    x = Dense(units=200, activation='elu', kernel_initializer='RandomNormal', name='ned_hidden3')(x)
    t_predictions = Dense(units=1, activation='sigmoid', name='ned_t_activation')(x)
    y_predictions = Dense(units=1, activation=None, name='ned_y_prediction')(x)
    concat_pred = Concatenate(1)([y_predictions, t_predictions])
    model = Model(inputs=inputs, outputs=concat_pred)
    return model


def post_cut(nednet, input_dim, reg_l2=0.01):
    # freeze the pretrained representation, drop the old heads, attach fresh ones
    for layer in nednet.layers:
        layer.trainable = False
    nednet.layers.pop()
    nednet.layers.pop()
    nednet.layers.pop()
    frozen = nednet
    x = frozen.layers[-1].output
    frozen.layers[-1].outbound_nodes = []
    input = frozen.input
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2), name='post_cut_y0_1')(x)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2), name='post_cut_y1_1')(x)
    # second layer
    y0_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2), name='post_cut_y0_2')(y0_hidden)
    y1_hidden = Dense(units=100, activation='elu', kernel_regularizer=regularizers.l2(reg_l2), name='post_cut_y1_2')(y1_hidden)
    # third layer
    y0_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y0_predictions')(y0_hidden)
    y1_predictions = Dense(units=1, activation=None, kernel_regularizer=regularizers.l2(reg_l2), name='y1_predictions')(y1_hidden)
    concat_pred = Concatenate(1)([y0_predictions, y1_predictions])
    model = Model(inputs=input, outputs=concat_pred)
    return model
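The targeted-regularization term assembled inside make_tarreg_loss perturbs the factual prediction by ε times the "clever covariate" h from targeted maximum likelihood estimation. The same few formulas in NumPy, outside TensorFlow (the function name is illustrative):

```python
import numpy as np

def targeted_reg(y_true, t_true, y0_pred, y1_pred, t_pred, epsilon):
    t_pred = np.clip(t_pred, 0.01, 0.99)               # keep propensities off 0/1
    y_pred = t_true * y1_pred + (1 - t_true) * y0_pred # factual prediction
    h = t_true / t_pred - (1 - t_true) / (1 - t_pred)  # "clever covariate"
    y_pert = y_pred + epsilon * h                      # perturbed prediction
    return float(np.sum((y_true - y_pert) ** 2))

# with epsilon = 0 this is just the plain factual squared error
y = np.array([1.0, 0.0])
t = np.array([1.0, 0.0])
val = targeted_reg(y, t, np.array([0.2, 0.1]), np.array([0.9, 0.8]),
                   np.array([0.5, 0.5]), 0.0)
print(val)
```

Training ε jointly with the network nudges the outcome heads toward predictions whose plug-in ATE satisfies the TMLE estimating equation.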

1.3 DRNet

Learning Counterfactual Representations for Estimating Individual Dose-Response Curves

 

1.4 VCNet

VCNET AND FUNCTIONAL TARGETED REGULARIZATION FOR LEARNING CAUSAL EFFECTS OF CONTINUOUS TREATMENTS

Handles continuous treatments

 

Building on the varying coefficient model, this paper makes the outcome branch itself a function of the treatment value, rather than designing a separate branch per treatment level, so the response is genuinely continuous in the treatment. In addition, the paper adopts the idea from "Adapting Neural Networks for the Estimation of Treatment Effects": train a classifier (a propensity head) to extract the part of the covariates most relevant to T.
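A minimal sketch of the varying-coefficient idea (illustrative NumPy, not VCNet's actual code, which uses spline bases and learned representations): the head's weights are not fixed numbers but functions of t, here expanded in a truncated polynomial basis, so one head serves every dose and predictions vary smoothly with a continuous treatment.

```python
import numpy as np

def basis(t, degree=2):
    """Polynomial basis [1, t, t^2] evaluated at treatment value(s) t."""
    return np.stack([t ** d for d in range(degree + 1)], axis=-1)

rng = np.random.default_rng(0)
d_repr, d_basis = 4, 3
theta = rng.normal(size=(d_basis, d_repr))  # coefficients of the weight functions

def varying_coeff_head(phi, t):
    """Prediction w(t)^T phi, where w(t) = basis(t) @ theta varies with t."""
    w_t = basis(np.asarray(t, dtype=float)) @ theta  # weights as a function of t
    return np.sum(w_t * phi, axis=-1)

phi = rng.normal(size=(5, d_repr))          # representations of 5 samples
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # continuous treatment doses
preds = varying_coeff_head(phi, t)
print(preds)                                # one smooth-in-t head for all doses
```

Replacing the polynomial basis with B-splines gives the piecewise-smooth dose-response curves used in the paper.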
 

References:

  1. [Dragonnet] Adapting Neural Networks for the Estimation of Treatment Effects - Zhihu
  2. Brady Neal causal inference course notes 8: classic model frameworks DML, S-learner, T-learner, X-learner, TARNet
  3. "Introduction to Causal Inference" course (2020) by Brady Neal - bilibili
  4. Shi, Claudia, David Blei, and Victor Veitch. "Adapting neural networks for the estimation of treatment effects." Advances in Neural Information Processing Systems 32 (2019).
  5. Shalit, Uri, Fredrik D. Johansson, and David Sontag. "Estimating individual treatment effect: generalization bounds and algorithms." International Conference on Machine Learning. PMLR, 2017.
  6. "Causal effect estimation" explained - CSDN blog
  7. Search :: Anaconda.org
  8. dragonnet/models.py at master · claudiashi57/dragonnet · GitHub
  9. Fixing "PackagesNotFoundError: The following packages are not available from current channels" - CSDN blog
  10. DRNet: Learning Counterfactual Representations for Estimating Individual Dose-Response Curves

II. To Be Added

  1. Detailed readings of DRNet and VCNet
  2. Related code details
  3. Industrial application examples
