
Paper Code Reading and Partial Reproduction: Deep Lasso


Paper: A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning | OpenReview

Code:

GitHub - vcherepanova/tabular-feature-selection: Repository for the feature selection with tabular models

The data is the same as in the FT-Transformer article: https://www.dropbox.com/s/o53umyg6mn3zhxy/

This paper proposes a new feature selection method: Deep Lasso.

I. Paper Overview

Real-world datasets rarely contain Gaussian noise features that are completely useless for prediction. However, past evaluations of feature selection algorithms often construct synthetic data containing pure noise features drawn from a Gaussian distribution. This is not only far from reality, it also makes the feature selection task easier than it actually is. The authors therefore build a feature selection benchmark on top of real datasets, adding random noise features, corrupted features, and second-order features that serve as prototypes of feature engineering (and may be redundant). Feature selection methods are then judged by the performance of downstream models (MLP and FT-Transformer) trained on the selected features.

II. Experimental Methods

The experiments use real-world datasets and add the following "extra features" to each of them. An upstream model then performs feature selection, and the selected features are passed to the downstream model for training:

Random features: noise features drawn from a Gaussian distribution are added directly.
Corrupted features: some of the original features are selected and corrupted with Gaussian or Laplacian noise.
Second-order features: products of randomly chosen original features, simulating the redundant features that feature engineering can produce. These may of course also turn out to be useful "non-noise" features, which has to be judged from downstream model performance.

In the actual experiments, the upstream and downstream training stages are wrapped into a single pipeline and handed to Optuna for hyperparameter tuning; the final models are then trained with several different random seeds.

The experiments use the following feature selection methods:

1. Univariate statistical tests: for classification, one-way ANOVA is used; the F statistic is formed from the between-group and within-group sums of squares and tests whether a feature differs significantly across classes. For regression, an F-test based on the regression and residual sums of squares tests whether the feature has a significant linear relationship with the target.
2. Lasso: an L1 regularization term is added to encourage sparsity in the model, and features are ranked by the magnitude of their coefficients.
3. 1L Lasso (First Layer Lasso): a Group Lasso penalty is applied to the first hidden layer of the MLP, and features are ranked by the group norm of their first-layer weights.

\min_\theta\;\alpha L_\theta(X,Y)+(1-\alpha)\sum_{j=1}^{m}\|W^{(j)}\|_2

The first term is the training loss, and the second term penalizes the L2 norm of each feature's group of first-layer weights; this penalty is what drives the feature selection.
4. Adaptive Lasso: similar to 1L Lasso above, except that each feature's group norm is additionally multiplied by an adaptive weight.

\min_\theta\;\alpha L_\theta(X,Y)+(1-\alpha)\sum_{j=1}^{m}\frac{1}{\|\widehat{W}^{(j)}\|_2^{\gamma}}\|W^{(j)}\|_2

Here \widehat{W}^{(j)} is a fixed reference weight for feature j (typically taken from a preliminary fit) that defines the adaptive per-feature weighting; a minimal sketch is given below.
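As an illustration only, here is a minimal PyTorch sketch of how such an adaptive group-Lasso penalty could be computed. The function name and the choice of reference weights W_hat are assumptions for this sketch, not the repository's implementation.

import torch

def adaptive_group_lasso_penalty(W, W_hat, gamma=1.0):
    # Illustrative sketch, not the repository's code.
    # Each feature's group norm ||W^(j)||_2 is weighted by 1 / ||W_hat^(j)||_2^gamma,
    # where W_hat is a fixed reference weight matrix (e.g. from a preliminary fit).
    group_norms = W.pow(2).sum(dim=0).sqrt()        # ||W^(j)||_2 per feature j
    ref_norms = W_hat.pow(2).sum(dim=0).sqrt()      # ||W_hat^(j)||_2 per feature j
    adaptive_weights = 1.0 / ref_norms.clamp_min(1e-8).pow(gamma)
    return (adaptive_weights * group_norms).sum()

# toy usage: first-layer weights of shape (hidden_dim, n_features)
W = torch.randn(16, 8, requires_grad=True)
W_hat = torch.randn(16, 8)
penalty = adaptive_group_lasso_penalty(W, W_hat, gamma=1.0)
# total loss would then be: alpha * task_loss + (1 - alpha) * penalty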
5. LassoNet: a neural architecture with built-in feature selection; a residual "skip" layer controls which features may enter the subsequent hidden layers, which sparsifies the features.
6. Random forest: a bagging ensemble of decision trees; features are ranked by their contribution to the ensemble, i.e. how much impurity a feature removes when it is used to split a node.
7. XGBoost: the most popular implementation of GBDT; feature importance is computed as the average gain over the nodes that split on that feature.
8. Attention Map: for attention-based models such as FT-Transformer, feature importance is measured from the attention map, i.e. the softmax-normalized scaled dot product of q and k computed during the forward pass, which can be viewed as a built-in form of feature weighting.
9. Deep Lasso, proposed by the authors, is a generalization of Lasso to differentiable models: a Lasso penalty is applied to the gradient of the loss with respect to each feature, making the gradients sparse; the idea is that this makes the model robust to changes in irrelevant features.

\min_\theta\;\alpha L_\theta(X,Y)+(1-\alpha)\sum_{j=1}^{m}\left\|\frac{\partial L_\theta(X,Y)}{\partial X^{(j)}}\right\|_2

Compared with the plain 1L Lasso, the penalty term here is the L2 norm of the loss gradient with respect to each feature. Once training is finished, that same gradient norm serves as the importance measure for each feature:

\left\|\frac{\partial L_\theta(X,Y)}{\partial X^{(j)}}\right\|_2

III. Experimental Details

This section walks through selected key pieces of code; the full code is available at the GitHub page linked at the top of this article.

1. Noise Generation

if add_noise == 'random_feats':
    np.random.seed(0)
    # number of extra features needed so that they make up noise_percent of all features
    n_feats = int(numerical_array.shape[1] / (1 - noise_percent) * noise_percent)
    uninformative_features = np.random.randn(numerical_array.shape[0], n_feats)
    numerical_array = np.concatenate([numerical_array, uninformative_features], axis=1)
elif add_noise == 'corrupted_feats':
    np.random.seed(0)
    n_max = int(numerical_array.shape[1] / 0.1 * 0.9)
    n_feats = int(numerical_array.shape[1] / (1 - noise_percent) * noise_percent)
    features_idx = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
    features_copy = numerical_array[:, features_idx]
    features_std = np.nanstd(features_copy, axis=0)
    alpha_noise = 0.5
    # mix each selected feature with zero-mean Gaussian noise of matching scale
    corrupted_features = (1 - alpha_noise) * features_copy \
        + alpha_noise * np.random.randn(numerical_array.shape[0], n_feats) * features_std
    numerical_array = np.concatenate([numerical_array, corrupted_features], axis=1)
elif add_noise == 'secondorder_feats':
    np.random.seed(0)
    n_max = int(numerical_array.shape[1] / 0.1 * 0.9)
    n_feats = int(numerical_array.shape[1] / (1 - noise_percent) * noise_percent)
    features_1 = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
    features_2 = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
    # products of randomly paired original features
    second_order_features = numerical_array[:, features_1] * numerical_array[:, features_2]
    numerical_array = np.concatenate([numerical_array, second_order_features], axis=1)

random_feats (random noise) are generated directly from a standard normal distribution; corrupted_feats (corrupted features) selects several numerical features and mixes in zero-mean noise with a standard deviation matching each feature; secondorder_feats (second-order features) randomly selects pairs of features and multiplies them to form new features added to the dataset.

Another point worth noting: when preparing the data, the authors merge the original train, val and test .npy files, add the noise features to the combined array, and then re-split it into training, validation and test sets with train_test_split.
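Below is a small sketch of that two-stage re-split, with stand-in arrays; the 80/10/10 proportions and the random_state are assumptions for illustration, not the repository's exact settings.

import numpy as np
from sklearn.model_selection import train_test_split

# Stand-ins for the arrays loaded from the original train/val/test .npy files.
X_train, X_val, X_test = np.random.randn(800, 8), np.random.randn(100, 8), np.random.randn(100, 8)
y_train, y_val, y_test = np.random.randn(800), np.random.randn(100), np.random.randn(100)

# Merge everything (the noise features would be appended to X_all here), then re-split.
X_all = np.concatenate([X_train, X_val, X_test], axis=0)
y_all = np.concatenate([y_train, y_val, y_test], axis=0)

# Two-stage split: hold out a test set first, then carve a validation set out of the rest
# (the 80/10/10 proportions are an assumption, not the repository's exact values).
X_tmp, X_te, y_tmp, y_te = train_test_split(X_all, y_all, test_size=0.1, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X_tmp, y_tmp, test_size=1/9, random_state=0)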

For categorical features, missing values are simply treated as a new category.

For numerical features, missing values are filled with the mean, followed by standardization (centering and scaling) or a quantile transform as preprocessing.

For the target y, regression problems use centering as preprocessing, while classification targets are generally left untouched.
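For concreteness, here is a minimal scikit-learn sketch of this preprocessing recipe on toy data; the exact transformer settings used in the repository may differ.

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, QuantileTransformer

# toy data; the repository's exact settings may differ
X_num = np.array([[1.0, np.nan], [2.0, 5.0], [3.0, 7.0]])   # numerical features with a missing value
y = np.array([10.0, 12.0, 14.0])                            # regression target

# Numerical features: mean-impute missing values, then center and scale
X_num = SimpleImputer(strategy="mean").fit_transform(X_num)
X_num = StandardScaler().fit_transform(X_num)
# Alternatively, a quantile transform could be used instead of StandardScaler:
# X_num = QuantileTransformer(output_distribution="normal", n_quantiles=3).fit_transform(X_num)

# Regression target: center (standardize) y
y = (y - y.mean()) / y.std()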

2. Extracting Feature Importance

Let's look at how the paper extracts feature importance for each type of upstream model:

if cfg.dataset.task == 'regression':
    if cfg.model.name == 'xgboost':
        model = xgb.XGBRegressor(**cfg.model, seed=cfg.hyp.seed)
    elif cfg.model.name == "univariate":
        model = SelectKBest(score_func=f_regression, k="all")
    elif cfg.model.name == "lasso":
        model = Lasso(alpha=cfg.model.alpha, random_state=cfg.hyp.seed)
    elif cfg.model.name == 'forest':
        model = RandomForestRegressor(n_estimators=cfg.model.n_estimators,
                                      max_depth=cfg.model.max_depth,
                                      random_state=cfg.hyp.seed,
                                      n_jobs=-1)
    else:
        raise NotImplementedError('Model is not implemented')
else:
    if cfg.model.name == 'xgboost':
        model = xgb.XGBClassifier(**cfg.model, seed=cfg.hyp.seed)
    elif cfg.model.name == "univariate":
        model = SelectKBest(score_func=f_classif, k="all")
    elif cfg.model.name == "lasso":
        model = LogisticRegression(penalty='l1', solver="saga",
                                   C=cfg.model.alpha, random_state=cfg.hyp.seed)
    elif cfg.model.name == 'forest':
        model = RandomForestClassifier(n_estimators=cfg.model.n_estimators,
                                       max_depth=cfg.model.max_depth,
                                       random_state=cfg.hyp.seed,
                                       n_jobs=-1)
    else:
        raise NotImplementedError('Model is not implemented')

First, the non-neural-network models, which the authors group under "classical_model": XGBoost, univariate statistical tests, Lasso, and random forest. Note that the univariate statistical test is not really a "model" but scikit-learn's SelectKBest selector.
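As a tiny self-contained example of how SelectKBest exposes per-feature scores (illustrative data, not from the benchmark):

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

# illustrative data: only feature 0 is informative
X = np.random.randn(200, 5)
y = 3.0 * X[:, 0] + 0.1 * np.random.randn(200)

selector = SelectKBest(score_func=f_regression, k="all").fit(X, y)
print(selector.scores_)   # per-feature F statistics; used as importances via np.abs(...)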

# Feature Selection
if cfg.mode == 'feature_selection':
    if cfg.model.name == 'xgboost':
        importances = model.feature_importances_
    elif cfg.model.name == 'univariate':
        importances = np.abs(model.scores_)
    elif cfg.model.name == 'lasso':
        importances = np.abs(model.coef_)
    elif cfg.model.name == 'forest':
        importances = model.feature_importances_
    else:
        raise NotImplementedError('Model is not implemented')

Extracting importances from these models is straightforward; essentially every package exposes a corresponding attribute that can be read directly.

Note that among the "classical_model" group, all models except XGBoost cannot handle categorical features directly.

For the neural-network models, there are three base architectures: FT-Transformer, MLP and ResNet. MLP and ResNet are standard and need no further explanation. FT-Transformer, in short, adds a Feature Tokenizer before the Transformer: numerical features go through a linear transformation, categorical features through an embedding, and a [CLS] token is appended as the result vector; the resulting tokens are then fed into an encoder-only Transformer.
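Below is a minimal sketch of the tokenizer idea, assuming per-feature linear maps for numerical features and embedding tables for categorical ones; the class and parameter names are made up for illustration and this is not the repository's implementation.

import torch
import torch.nn as nn

class SimpleFeatureTokenizer(nn.Module):
    """Minimal sketch of the FT-Transformer tokenizer idea (not the paper's exact code).

    Each numerical feature gets its own learned scale/bias producing a d-dim token,
    each categorical feature gets an embedding table, and a [CLS] token is appended.
    """
    def __init__(self, n_num: int, cat_cardinalities: list, d: int):
        super().__init__()
        self.num_weight = nn.Parameter(torch.randn(n_num, d))   # per-feature linear map
        self.num_bias = nn.Parameter(torch.zeros(n_num, d))
        self.cat_embeddings = nn.ModuleList([nn.Embedding(c, d) for c in cat_cardinalities])
        self.cls = nn.Parameter(torch.randn(1, 1, d))            # [CLS] token

    def forward(self, x_num: torch.Tensor, x_cat: torch.Tensor) -> torch.Tensor:
        tokens = [x_num.unsqueeze(-1) * self.num_weight + self.num_bias]            # (B, n_num, d)
        tokens += [emb(x_cat[:, i]).unsqueeze(1) for i, emb in enumerate(self.cat_embeddings)]
        tokens.append(self.cls.expand(x_num.shape[0], -1, -1))
        return torch.cat(tokens, dim=1)   # (B, n_num + n_cat + 1, d), fed to the Transformer

tok = SimpleFeatureTokenizer(n_num=3, cat_cardinalities=[4, 7], d=8)
out = tok(torch.randn(2, 3), torch.randint(0, 4, (2, 2)))
print(out.shape)   # torch.Size([2, 6, 8])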

For the neural-network models, feature selection is mostly implemented in trainning.py by adding a regularization term to the loss:

def add_dimension_glasso(var, dim=0):
    return var.pow(2).sum(dim=dim).add(1e-8).pow(1/2.).sum()

if hyp.regularization == 'deep_lasso':
    grad_params = autograd.grad(loss, inputs_num, create_graph=True, allow_unused=True)
    reg = add_dimension_glasso(grad_params[0], dim=0)
    loss = hyp.reg_weight * reg + (1 - hyp.reg_weight) * loss
elif hyp.regularization == 'lasso':
    reg = add_dimension_glasso(net.module.head.weight)
    loss = hyp.reg_weight * reg + (1 - hyp.reg_weight) * loss
elif hyp.regularization == 'first_lasso':
    reg = add_dimension_glasso(net.module.layers[0].weight)
    loss = hyp.reg_weight * reg + (1 - hyp.reg_weight) * loss

loss.backward()
optimizer.step()
train_loss += loss.item()
total += targets.size(0)
if hyp.regularization == 'deep_lasso':
    grad_avg += grad_params[0].detach().cpu().abs().mean(0)
    del grad_params

The add_dimension_glasso function adds a group-Lasso penalty over the weights of a given layer; note that 1e-8 is added inside the L2 norm. My understanding is that this is a smoothing term that prevents the sum from being exactly zero (where the square root would not be differentiable).
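In math form, my reading of what add_dimension_glasso(W, dim=0) computes for a weight matrix W with features along the columns is:

\mathrm{reg}(W)=\sum_{j=1}^{m}\Big(\sum_{i} W_{ij}^{2}+10^{-8}\Big)^{1/2}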

When the regularization is 'lasso', the penalty is applied to the network's final output layer (head); when it is 1L Lasso ('first_lasso'), the penalty is applied to the first layer (layers[0]).

As for Deep Lasso, the code first uses autograd.grad to obtain the gradients. A quick note on the autograd.grad function:

The first argument, outputs, is the quantity to differentiate, i.e. the loss; the second argument, inputs, is the variable the gradient is taken with respect to. Setting create_graph=True allows higher-order gradients to be computed (so the penalty itself can be backpropagated), and allow_unused=True permits some inputs to be unused in the graph instead of raising an error. (Note that only inputs_num is passed as inputs here, not the categorical features, possibly because the categorical features first go through an embedding layer; this would need to be checked experimentally.)

The L2 norm of this gradient is then computed per feature and added to the loss as the penalty term.
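Here is a minimal standalone example of autograd.grad computing gradients with respect to the inputs, mirroring the Deep Lasso penalty on a toy network (not the repository code):

import torch
from torch import autograd, nn

# toy network, not the repository code
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
criterion = nn.MSELoss()

x = torch.randn(16, 4, requires_grad=True)   # gradients w.r.t. the inputs are needed
y = torch.randn(16, 1)

loss = criterion(net(x), y)
grads, = autograd.grad(loss, x, create_graph=True, allow_unused=True)   # same shape as x: (16, 4)

# per-feature group norm of the input gradients, i.e. the Deep Lasso penalty term
penalty = grads.pow(2).sum(dim=0).add(1e-8).sqrt().sum()
print(grads.shape, penalty.item())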

In addition, the training loop maintains a grad_avg variable that accumulates the mean absolute gradient, but this variable is never actually used afterwards.

For FT-Transformer, in order to keep the attention maps (the scaled dot-product attention) computed during training, the model definition needs a slight modification:

class MultiheadAttention(nn.Module):
    def __init__(
        self, d: int, n_heads: int, dropout: float, initialization: str
    ) -> None:
        if n_heads > 1:
            assert d % n_heads == 0
        assert initialization in ["xavier", "kaiming"]

        super().__init__()
        self.W_q = nn.Linear(d, d)
        self.W_k = nn.Linear(d, d)
        self.W_v = nn.Linear(d, d)
        self.W_out = nn.Linear(d, d) if n_heads > 1 else None
        self.n_heads = n_heads
        self.dropout = nn.Dropout(dropout) if dropout else None

        for m in [self.W_q, self.W_k, self.W_v]:
            if initialization == "xavier" and (n_heads > 1 or m is not self.W_v):
                # gain is needed since W_qkv is represented with 3 separate layers
                nn_init.xavier_uniform_(m.weight, gain=1 / math.sqrt(2))
            nn_init.zeros_(m.bias)
        if self.W_out is not None:
            nn_init.zeros_(self.W_out.bias)

    def _reshape(self, x: Tensor) -> Tensor:
        batch_size, n_tokens, d = x.shape
        d_head = d // self.n_heads
        return (
            x.reshape(batch_size, n_tokens, self.n_heads, d_head)
            .transpose(1, 2)
            .reshape(batch_size * self.n_heads, n_tokens, d_head)
        )

    def forward(
        self,
        x_q: Tensor,
        x_kv: Tensor,
        key_compression: ty.Optional[nn.Linear],
        value_compression: ty.Optional[nn.Linear],
    ) -> Tensor:
        q, k, v = self.W_q(x_q), self.W_k(x_kv), self.W_v(x_kv)
        for tensor in [q, k, v]:
            assert tensor.shape[-1] % self.n_heads == 0
        if key_compression is not None:
            assert value_compression is not None
            k = key_compression(k.transpose(1, 2)).transpose(1, 2)
            v = value_compression(v.transpose(1, 2)).transpose(1, 2)
        else:
            assert value_compression is None

        batch_size = len(q)
        d_head_key = k.shape[-1] // self.n_heads
        d_head_value = v.shape[-1] // self.n_heads
        n_q_tokens = q.shape[1]

        q = self._reshape(q)
        k = self._reshape(k)
        attention_logits = q @ k.transpose(1, 2) / math.sqrt(d_head_key)
        attention_probs = F.softmax(attention_logits, dim=-1)
        if self.dropout is not None:
            attention_probs = self.dropout(attention_probs)
        x = attention_probs @ self._reshape(v)
        x = (
            x.reshape(batch_size, self.n_heads, n_q_tokens, d_head_value)
            .transpose(1, 2)
            .reshape(batch_size, n_q_tokens, self.n_heads * d_head_value)
        )
        if self.W_out is not None:
            x = self.W_out(x)
        # return the attention maps alongside the output so they can be collected later
        return x, {
            'attention_logits': attention_logits,
            'attention_probs': attention_probs,
        }

This multi-head attention definition is the same as the standard one, except for the extra step of returning the attention maps. The attention_probs value is what will later be used as the Attention Map importance measure.

class SaveAttentionMaps:
    def __init__(self):
        self.attention_maps = None
        # self.n_batches = 0

    def __call__(self, _, __, output):
        if self.attention_maps is None:
            self.attention_maps = output[1]['attention_probs'].detach().cpu().sum(0)
        else:
            self.attention_maps += output[1]['attention_probs'].detach().cpu().sum(0)


def get_feat_importance_attention(net, testloader, device):
    net.eval()
    hook = SaveAttentionMaps()
    for block in net.layers:
        block['attention'].register_forward_hook(hook)

    for batch_idx, (inputs_num, inputs_cat, targets) in enumerate(testloader):
        inputs_num, inputs_cat, targets = inputs_num.to(device).float(), inputs_cat.to(device), targets.to(device)
        inputs_num, inputs_cat = inputs_num if inputs_num.nelement() != 0 else None, \
                                 inputs_cat if inputs_cat.nelement() != 0 else None
        net(inputs_num, inputs_cat)

    n_blocks = len(net.layers)
    n_objects = len(testloader.dataset)
    n_heads = net.layers[0]['attention'].n_heads
    n_features = inputs_num.shape[1]
    n_tokens = n_features + 1

    attention_maps = hook.attention_maps
    average_attention_map = attention_maps / (n_objects * n_blocks * n_heads)
    assert attention_maps.shape == (n_tokens, n_tokens)

    # Calculate feature importance and ranks.
    average_cls_attention_map = average_attention_map[0]  # consider only the [CLS] token
    feature_importance = average_cls_attention_map[1:]  # drop the [CLS] token importance
    assert feature_importance.shape == (n_features,)
    feature_ranks = scipy.stats.rankdata(-feature_importance.numpy())
    feature_indices_sorted_by_importance = feature_importance.argsort(descending=True).numpy()
    return average_cls_attention_map, feature_importance, feature_ranks, feature_indices_sorted_by_importance

The register_forward_hook method registers a callback that is invoked on every forward pass of the module. Here a SaveAttentionMaps instance is used as the hook: each call accumulates attention_probs, and the accumulated sums are averaged at the end. The row of the averaged map belonging to the [CLS] token is then used as the feature importance measure feature_importance, and sorting it gives the feature ranking.

(Is the average_cls_attention_map part perhaps a mistake?)
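For reference, here is a minimal standalone toy example of register_forward_hook, unrelated to the repository code, just to show when the hook fires and what it receives:

import torch
from torch import nn

# standalone toy example, not the repository code
captured = {}

def save_output(module, inputs, output):
    # forward hook signature: (module, inputs, output); called on every forward pass
    captured["relu_out"] = output.detach()

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
handle = net[1].register_forward_hook(save_output)

net(torch.randn(2, 4))
print(captured["relu_out"].shape)   # torch.Size([2, 8])
handle.remove()                      # detach the hook when done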

def get_feat_importance_deeplasso(net, testloader, criterion, device):
    net.eval()
    grads = []
    for batch_idx, (inputs_num, inputs_cat, targets) in enumerate(testloader):
        inputs_num, inputs_cat, targets = inputs_num.to(device).float(), inputs_cat.to(device), targets.to(device)
        inputs_num, inputs_cat = inputs_num if inputs_num.nelement() != 0 else None, \
                                 inputs_cat if inputs_cat.nelement() != 0 else None
        inputs_num.requires_grad_()
        outputs = net(inputs_num, inputs_cat)
        loss = criterion(outputs, targets)
        grad_params = autograd.grad(loss, inputs_num, create_graph=True, allow_unused=True)
        grads.append(grad_params[0].detach().cpu())
    grads = torch.cat(grads)
    importances = grads.pow(2).sum(dim=0).pow(1/2.)
    return importances


def get_feat_importance_lasso(net):
    importances = net.module.head.weight.detach().cpu().pow(2).sum(dim=0).pow(1 / 2.)
    return importances


def get_feat_importance_firstlasso(net):
    importances = net.module.layers[0].weight.detach().cpu().pow(2).sum(dim=0).pow(1 / 2.)
    return importances

For Lasso and 1L Lasso, feature importance is the L2 norm of the last-layer and first-layer weights respectively; for Deep Lasso, it is obtained by forward passes over the held-out data followed by the L2 norm of the gradients with respect to the inputs. Note that inputs_cat is constructed with torch.empty as a placeholder when there are no categorical features, so calling .to(device) on it does not raise an error.

3. Warm-up

I wasn't previously familiar with warm-up, so here is only a brief explanation.

Early in training, the randomly initialized weights usually produce outputs far from the targets, so the loss and the backpropagated gradients are large. Using a large learning rate at this point makes optimization overshoot.

The usual illustration compares two settings: a large learning rate tends to skip over the optimum and makes the loss oscillate, while a small learning rate avoids oscillation and overshooting but makes training very slow; even learning-rate decay cannot avoid this trade-off. The natural fix is to start with a small learning rate, increase it once the gradients are no longer huge, and then decay it as usual. That is the basic idea of warm-up.

""" warmup.py
code for warmup learning rate scheduler
borrowed from https://github.com/ArneNx/pytorch_warmup/tree/warmup_fix
and modified July 2020
"""
import math
from torch.optim import Optimizer


class BaseWarmup:
    """Base class for all warmup schedules

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_params (list): warmup paramters
        last_step (int): The index of last step. (Default: -1)
        warmup_period (int or list): Warmup period
    """

    def __init__(self, optimizer, warmup_params, last_step=-1, warmup_period=0):
        if not isinstance(optimizer, Optimizer):
            raise TypeError('{} is not an Optimizer'.format(
                type(optimizer).__name__))
        self.optimizer = optimizer
        self.warmup_params = warmup_params
        self.last_step = last_step
        self.base_lrs = [group['lr'] for group in self.optimizer.param_groups]
        self.warmup_period = warmup_period
        self.dampen()

    def state_dict(self):
        """Returns the state of the warmup scheduler as a :class:`dict`.

        It contains an entry for every variable in self.__dict__ which
        is not the optimizer.
        """
        return {key: value for key, value in self.__dict__.items() if key != 'optimizer'}

    def load_state_dict(self, state_dict):
        """Loads the warmup scheduler's state.

        Arguments:
            state_dict (dict): warmup scheduler state. Should be an object returned
                from a call to :meth:`state_dict`.
        """
        self.__dict__.update(state_dict)

    def dampen(self, step=None):
        """Dampen the learning rates.

        Arguments:
            step (int): The index of current step. (Default: None)
        """
        if step is None:
            step = self.last_step + 1
        self.last_step = step
        if isinstance(self.warmup_period, int) and step < self.warmup_period:
            for i, (group, params) in enumerate(zip(self.optimizer.param_groups,
                                                    self.warmup_params)):
                if isinstance(self.warmup_period, list) and step >= self.warmup_period[i]:
                    continue
                omega = self.warmup_factor(step, **params)
                group['lr'] = omega * self.base_lrs[i]

    def warmup_factor(self, step, warmup_period):
        """Place holder for objects that inherit BaseWarmup."""
        raise NotImplementedError


def get_warmup_params(warmup_period, group_count):
    if type(warmup_period) == list:
        if len(warmup_period) != group_count:
            raise ValueError(
                'size of warmup_period does not equal {}.'.format(group_count))
        for x in warmup_period:
            if type(x) != int:
                raise ValueError(
                    'An element in warmup_period, {}, is not an int.'.format(
                        type(x).__name__))
        warmup_params = [dict(warmup_period=x) for x in warmup_period]
    elif type(warmup_period) == int:
        warmup_params = [dict(warmup_period=warmup_period)
                         for _ in range(group_count)]
    else:
        raise TypeError('{} is not a list nor an int.'.format(
            type(warmup_period).__name__))
    return warmup_params


class LinearWarmup(BaseWarmup):
    """Linear warmup schedule.

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_period (int or list): Warmup period
        last_step (int): The index of last step. (Default: -1)
    """

    def __init__(self, optimizer, warmup_period, last_step=-1):
        group_count = len(optimizer.param_groups)
        warmup_params = get_warmup_params(warmup_period, group_count)
        super().__init__(optimizer, warmup_params, last_step, warmup_period)

    def warmup_factor(self, step, warmup_period):
        return min(1.0, (step + 1) / warmup_period)


class ExponentialWarmup(BaseWarmup):
    """Exponential warmup schedule.

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_period (int or list): Effective warmup period
        last_step (int): The index of last step. (Default: -1)
    """

    def __init__(self, optimizer, warmup_period, last_step=-1):
        group_count = len(optimizer.param_groups)
        warmup_params = get_warmup_params(warmup_period, group_count)
        super().__init__(optimizer, warmup_params, last_step, warmup_period)

    def warmup_factor(self, step, warmup_period):
        if step + 1 >= warmup_period:
            return 1.0
        else:
            return 1.0 - math.exp(-(step + 1) / warmup_period)

In the actual code, LinearWarmup and ExponentialWarmup instances are created, and the dampen function is called after each training epoch to update the learning rate in the optimizer.

The warmup classes take the following arguments:

optimizer: the optimizer whose learning rate will be updated.

warmup_params: in this project these are always produced by get_warmup_params, which simply wraps warmup_period into a list of dicts (one per parameter group) that dampen unpacks into warmup_factor.

last_step: the step index of the previous dampen call; it is incremented by 1 on each call. Once it reaches warmup_period, the learning rate is no longer updated.

warmup_period: the length of the warm-up; once the number of training epochs exceeds this value, warm-up stops.

The core of warm-up is to compute a factor omega and multiply it with the base learning rate; different warm-up schedules simply define different ways of computing omega.

LinearWarmup increases omega linearly with the training step, while ExponentialWarmup increases it following an exponential saturation curve. A usage sketch is shown below.
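Here is a hypothetical usage sketch, assuming the warmup.py shown above is importable; the optimizer, learning rate and warmup_period values are arbitrary choices for illustration, not the repository's settings.

import torch
from torch import nn, optim
from warmup import LinearWarmup   # assumes the warmup.py above is on the import path

# hypothetical usage sketch; values are illustrative
net = nn.Linear(10, 1)
optimizer = optim.AdamW(net.parameters(), lr=1e-3)
warmup = LinearWarmup(optimizer, warmup_period=10)   # ramp the lr up over the first 10 epochs

for epoch in range(20):
    # ... run one training epoch here ...
    warmup.dampen()                                   # rescale lr by the warmup factor omega
    print(epoch, optimizer.param_groups[0]["lr"])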

IV. Experimental Results

For random noise features, with an MLP downstream, XGBoost, random forest, univariate statistical tests and Deep Lasso perform comparably, while first-layer Lasso, Lasso regression, AGL (adaptive group Lasso), LassoNet and Attention Map are weaker. With FT-Transformer downstream, random forest and XGBoost outperform the other upstream selectors.

For corrupted features, Deep Lasso and XGBoost outperform the other methods: XGBoost is better when the downstream model is FT-Transformer, while Deep Lasso does better when the downstream model is an MLP.

 

Finally, for second-order features, Deep Lasso clearly outperforms the other upstream selectors, especially when 75% of the features are extra ones (see the Rank column in the paper). The authors argue that this shows Deep Lasso excels on the more challenging feature selection problems involving a large number of spurious or redundant features.

V. Reproduction

1. Adding LassoNet

LassoNet is used for feature selection here. Note that the packaged LassoNet cannot handle categorical features (the model has no embedding step), and LassoNet adds an outer loop over the hyperparameter M and the lambda path, so a large search space makes training slow. Only numerical features are considered here.

First, add the LassoNet branches in train_classical.py:

...
elif cfg.model.name == 'lassonet':
    model = LassoNetRegressor(hidden_dims=(cfg.model.hidden_dims,),
                              M=cfg.model.M,
                              lambda_start="auto")
...
elif cfg.model.name == 'lassonet':
    model = LassoNetClassifier(hidden_dims=(cfg.model.hidden_dims,),
                               M=cfg.model.M,
                               lambda_start="auto")
...
# model fitting
elif cfg.model.name == "lassonet":
    model.path(X_train, y_train)
...
# feature importances
elif cfg.model.name == "lassonet":
    importances = np.abs(model.feature_importances_.cpu().numpy())

Next, add the LassoNet parameter handling in tune_full_pipeline.py and run_full_pipeline.py.

# tune_full_pipeline.py
...
if model == "lassonet":
    hidden_dims = trial.suggest_int("hidden_dims", 1, 512)
    M = trial.suggest_int("M", 10, 50)
    model_params = {
        'hidden_dims': hidden_dims,
        "M": M
    }
    training_params = {}
...

# run_full_pipeline.py
...
if model == "lassonet":
    model_params = {
        'hidden_dims': hypers_dict[f"hidden_dims"],
        "M": hypers_dict[f"M"]
    }
    training_params = {}
...

Finally, define the YAML config files under the config/hyp and config/model folders.

hyp_for_lassonet.yaml:

save_period: -1
seed: 1

lassonet.yaml:

name: lassonet
hidden_dims: 20
M: 20

2. Reproduction Results

python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9

The final results are shown in the table below:

                   noise_corrupted_feats   noise_random_feats   noise_secondorder_feats
1l_lasso_mlp       -0.443401089            -0.444775664         -0.442463761
am_fttransformer   -0.421166045            -0.425570707         -0.425962898
deep_lasso_mlp     -0.453364613            -0.448686116         -0.439467445
lassonet_mlp       -0.448687207            -0.456497796         -0.449663611
xgboost_mlp        -0.452297124            -0.449051298         -0.456485655

I reproduced the results on the California Housing dataset. Among the downstream models, FT-Transformer appears to outperform the others, but since the upstream and downstream model types differ, the am_fttransformer row cannot be directly compared with the other experiments.

With an MLP as the downstream model, the first-layer Lasso selection outperforms the others under corrupted-feature and random-feature noise, while with second-order features Deep Lasso does best.
