T5 proposes a unified framework that casts every NLP task as a text-to-text task: the input is text and the output is text. This makes it straightforward to evaluate, across a whole range of NLP tasks (reading comprehension, summarization, text classification, and so on), the effect of different model architectures, pre-training objectives, unlabeled datasets, and so forth.
On the input side, an English-to-German translation task only needs the prefix "translate English to German: " prepended to each training input. To translate "That is good", the input becomes "translate English to German: That is good." and the model directly outputs the German translation "Das ist gut.". Even STS-B (semantic textual similarity), which requires a continuous-valued output, is handled by emitting the number as text. In this way every NLP task is converted to text-to-text form, so a single model, a single loss function, a single training procedure, and a single decoding procedure can cover all of them.
On the output side, a classification task such as natural language inference requires the model to literally generate one of the strings "entailment", "neutral", or "contradiction"; any other output is counted as wrong. For regression, the model outputs the floating-point value as a text string.
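As a quick illustration (my own minimal sketch, not from the original post, assuming the Hugging Face transformers library and the public t5-small checkpoint are available), the text-to-text interface looks like this in code:
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

# Translation: prepend the task prefix, then let the decoder generate the target text.
inputs = tokenizer("translate English to German: That is good.", return_tensors="tf")
output_ids = model.generate(inputs.input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # typically "Das ist gut."

# Classification (NLI) works the same way: the label itself is generated as text.
nli = tokenizer(
    "mnli hypothesis: An animal is on the mat. premise: The cat sat on the mat.",
    return_tensors="tf",
)
label_ids = model.generate(nli.input_ids, max_length=5)
print(tokenizer.decode(label_ids[0], skip_special_tokens=True))  # e.g. "entailment"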
Next comes a broad exploration of pre-training objectives. The comparison is organized along four dimensions:
The first dimension: the high-level approach (the self-supervised pre-training method), with three candidates:
Language-model style: predict from left to right, as GPT-2 does.
BERT-style: corrupt part of the input and reconstruct it, as BERT does; this works best.
Deshuffling: shuffle the text and recover the original order.
The second dimension: the strategy for corrupting part of the text, again with three options:
Mask: as most current models do, replace each corrupted token with a special symbol such as [M].
Replace span: effectively merges adjacent [M] symbols of the Mask strategy, replacing each corrupted span with a single sentinel token, which improves efficiency; this works best (a simplified sketch follows after this list).
Drop: no replacement at all; simply drop some tokens at random.
The third dimension: how much of the text to corrupt. Four rates were tried: 10%, 15%, 25%, 50%; BERT's 15% turned out best.
The fourth dimension: for Replace span, how long the corrupted spans should be. Lengths of 2, 3, 5, and 10 were tried, and 3 worked best.
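The winning combination (BERT-style span corruption, roughly 15% corruption, mean span length 3) can be illustrated with a simplified sketch. This is my own toy implementation for illustration, not the original T5 preprocessing; the <extra_id_N> sentinel names follow the T5 vocabulary, but the span sampling here is deliberately naive:
import random

def span_corrupt(tokens, corrupt_rate=0.15, mean_span_len=3, seed=0):
    # Pick roughly corrupt_rate of the positions, grouped into short spans.
    rng = random.Random(seed)
    n_corrupt = max(1, round(len(tokens) * corrupt_rate))
    corrupted = set()
    while len(corrupted) < n_corrupt:
        start = rng.randrange(len(tokens))
        corrupted.update(range(start, min(start + mean_span_len, len(tokens))))

    # Each corrupted span becomes one sentinel in the input; the target lists the
    # sentinels followed by the tokens they replaced.
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(tokens):
        if i in corrupted:
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            while i < len(tokens) and i in corrupted:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")  # closing sentinel
    return " ".join(inputs), " ".join(targets)

print(span_corrupt("Thank you for inviting me to your party last week .".split()))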
Finally, combining all of the findings above, models of several sizes were trained, from small to large: Small, Base, Large, 3B, and 11B.
Unlike BERT (an encoder-only model) and GPT (a decoder-only model), T5 uses the Transformer's encoder-decoder structure. Note, however, that T5's encoder-decoder is not identical to the original Transformer's; the differences are described below.
In T5, each encoder and decoder layer is called a block, and its sub-layers, the self-attention layer and the feed-forward layer, are called subcomponents. Unlike the original Transformer, T5 applies layer norm to each subcomponent, and the layer norm itself is computed with a different formula.
The standard layer norm is computed as LayerNorm(x) = γ · (x − μ) / √(σ² + ε) + β, where μ and σ² are the mean and variance over the hidden dimension. The corresponding TensorFlow op is tf.keras.layers.LayerNormalization(); see the TensorFlow API documentation for the details.
The layer norm used in T5 (an RMSNorm-style variant) drops the mean subtraction and the bias: T5LayerNorm(x) = w · x / √(mean(x²) + ε). The corresponding Hugging Face T5 code:
class TFT5LayerNorm(layers.Layer):
def __init__(self, epsilon=1e-6, **kwargs):
super().__init__(**kwargs)
self.variance_epsilon = epsilon
def build(self, input_shape):
self.weight = self.add_weight("weight", shape=(input_shape[-1],), initializer="ones")
super().build(input_shape)
def call(self, hidden_states):
variance = tf.math.reduce_mean(tf.math.square(hidden_states), axis=-1, keepdims=True)
hidden_states = hidden_states * tf.math.rsqrt(variance + self.variance_epsilon)
return self.weight * hidden_states
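To make the difference concrete, here is a small check (my own sketch, not part of the original post, assuming the TFT5LayerNorm class above together with import tensorflow as tf and from tensorflow.keras import layers) comparing it with the standard tf.keras.layers.LayerNormalization on the same tensor:
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0, 10.0]])

# Standard layer norm: subtract the mean, divide by the standard deviation, then scale and shift.
print(tf.keras.layers.LayerNormalization(epsilon=1e-6)(x))  # zero-mean output

# T5 layer norm: only rescale by the root mean square; no centering and no bias term.
print(TFT5LayerNorm(epsilon=1e-6)(x))  # the mean is not removed, only the scale changes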
In addition, each subcomponent is wrapped in a skip (residual) connection that bypasses the layer norm and the weight layer: the first weight layer is the attention layer and the second is the feed-forward layer. In other words, unlike the original Transformer, T5 applies the residual connection and layer norm around each subcomponent of the block, in a pre-norm arrangement, as sketched below.
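In pseudocode (a schematic sketch of the structure just described, not the actual Hugging Face code), every subcomponent of a T5 block therefore computes:
def t5_sublayer(x, weight_layer, layer_norm, dropout):
    # Pre-norm: normalize first, run the weight layer (attention or feed-forward),
    # then add the result back onto the un-normalized input (the skip connection).
    return x + dropout(weight_layer(layer_norm(x)))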
The original Transformer uses fixed (sinusoidal) absolute position encodings, BERT uses learned absolute position embeddings, and Transformer-XL uses relative position encodings. T5 uses a simplified relative position encoding: a learned scalar bias is added to each attention logit, i.e. score(i, j) = q_i · k_j + b_bucket(j − i), where the relative offset j − i is mapped to one of a fixed number of buckets and each bucket has one learned bias per attention head.
Paper: https://arxiv.org/abs/1910.10683
Code: https://github.com/google-research/text-to-text-transfer-transformer
The corresponding code:
class TFT5Attention(layers.Layer):
NEW_ID = itertools.count()
def __init__(self, config, has_relative_attention_bias=False, **kwargs):
super().__init__(**kwargs)
self.layer_id = next(TFT5Attention.NEW_ID)
self.is_decoder = config.is_decoder
self.use_cache = config.use_cache
self.has_relative_attention_bias = has_relative_attention_bias
self.output_attentions = config.output_attentions
self.relative_attention_num_buckets = config.relative_attention_num_buckets
self.relative_attention_max_distance = config.relative_attention_max_distance
self.d_model = config.d_model
self.key_value_proj_dim = config.d_kv
self.n_heads = config.num_heads
self.inner_dim = self.n_heads * self.key_value_proj_dim
q_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * ((self.inner_dim * self.key_value_proj_dim) ** -0.5)
)
k_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
v_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
o_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
self.relative_attention_bias_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
self.q = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="q", kernel_initializer=q_initializer
)
self.k = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="k", kernel_initializer=k_initializer
)
self.v = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="v", kernel_initializer=v_initializer
)
self.o = tf.keras.layers.Dense(
self.d_model, use_bias=False, name="o", kernel_initializer=o_initializer
)
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
def build(self, input_shape):
if self.has_relative_attention_bias:
with tf.name_scope("relative_attention_bias"):
self.relative_attention_bias = self.add_weight(
name="enbeddings",
shape=[self.relative_attention_num_buckets, self.n_heads],
initializer=self.relative_attention_bias_initializer
)
return super().build(input_shape)
def call(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
training=False,
output_attentions=False
):
# hidden_states
# shape: (batch_size, seq_length, hidden_dim)
batch_size, seq_length = shape_list(hidden_states)[: 2]
real_seq_length = seq_length
# past_key_value: a tuple of two tensors (cached key states, cached value states),
# each of shape (batch_size, n_heads, past_length, dim_per_head)
# If past_key_value is not None we are doing incremental decoding, so hidden_states has seq_length = 1
# and real_seq_length = query_length = past_length + 1
if past_key_value is not None:
real_seq_length += shape_list(past_key_value[0])[2] if query_length is None else query_length
# key_value_states is the encoder_hidden_states
# shape: (batch_size, seq_length, hidden_dim)
# for cross-attention, key_length = shape_list(key_value_states)[1]
# for self-attention, key_length = real_seq_length
key_length = real_seq_length if key_value_states is None else shape_list(key_value_states)[1]
def shape(hidden_states):
return tf.transpose(
tf.reshape(hidden_states, (batch_size, -1, self.n_heads, self.key_value_proj_dim)),
perm=(0, 2, 1, 3)
)
def unshape(hidden_states):
return tf.reshape(tf.transpose(hidden_states, perm=(0, 2, 1, 3)), (batch_size, -1, self.inner_dim))
def project(hidden_states, proj_layer, key_value_states, past_key_value):
if key_value_states is None:
# encoder self-attn
hidden_states = shape(proj_layer(hidden_states))
elif past_key_value is None:
# decoder cross-attn
hidden_states = shape(proj_layer(key_value_states))
if past_key_value is not None:
if key_value_states is None:
# decoder self-attn
hidden_states = tf.concat([past_key_value, hidden_states], axis=2)
else:
# decoder cross-attn
hidden_states = past_key_value
return hidden_states
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, query_length, dim_per_head)
key_states = project(
hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None
)
value_states = project(
hidden_states, self.v, key_value_states, past_key_value[1] if past_key_value is not None else None
)
if self.is_decoder and use_cache:
present_key_value_state = (key_states, value_states)
else:
present_key_value_state = None
scores = tf.einsum(
"bnqd,bnkd->bnqk", query_states, key_states
)
if position_bias is None:
if not self.has_relative_attention_bias:
position_bias = tf.zeros((1, self.n_heads, real_seq_length, key_length))
else:
position_bias = self.compute_bias(real_seq_length, key_length)
if past_key_value is not None:
position_bias = position_bias[:, :, -seq_length:, :]
if mask is not None:
position_bias = tf.cast(position_bias, dtype=mask.dtype)
position_bias = position_bias + mask
scores += position_bias
weights = stable_softmax(scores, axis=-1)
weights = self.dropout(weights, training=training)
if layer_head_mask is not None:
weights = tf.reshape(layer_head_mask, (1, -1, 1, 1)) * weights
attn_output = tf.matmul(weights, value_states) # (batch_size, n_heads, query_length, dim_per_head)
attn_output = self.o(unshape(attn_output))
outputs = (attn_output,) + (present_key_value_state,) + (position_bias,)
if output_attentions:
outputs = outputs + (weights,)
return outputs
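Note that the code above calls self.compute_bias, which the post does not show (it is also missing from the full listing further down). A sketch of the two missing TFT5Attention methods, following the logic of the Hugging Face implementation with T5's default bucketing (nearby offsets get their own buckets, distant offsets share logarithmically sized buckets; import math is needed in addition to the imports below), would be:
import math

def _relative_position_bucket(self, relative_position, bidirectional=True, num_buckets=32, max_distance=128):
    # Map each (key position - query position) offset to a bucket index.
    relative_buckets = 0
    if bidirectional:
        num_buckets //= 2
        relative_buckets += tf.cast(tf.math.greater(relative_position, 0), tf.int32) * num_buckets
        relative_position = tf.math.abs(relative_position)
    else:
        relative_position = -tf.math.minimum(relative_position, 0)
    max_exact = num_buckets // 2
    is_small = tf.math.less(relative_position, max_exact)
    relative_position_if_large = max_exact + tf.cast(
        tf.math.log(tf.cast(relative_position, tf.float32) / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact),
        tf.int32,
    )
    relative_position_if_large = tf.math.minimum(relative_position_if_large, num_buckets - 1)
    relative_buckets += tf.where(is_small, relative_position, relative_position_if_large)
    return relative_buckets

def compute_bias(self, query_length, key_length):
    # One learned scalar per (bucket, head), gathered into a (1, n_heads, qlen, klen) bias.
    context_position = tf.range(query_length)[:, None]
    memory_position = tf.range(key_length)[None, :]
    relative_position = memory_position - context_position
    buckets = self._relative_position_bucket(
        relative_position,
        bidirectional=(not self.is_decoder),
        num_buckets=self.relative_attention_num_buckets,
        max_distance=self.relative_attention_max_distance,
    )
    values = tf.gather(self.relative_attention_bias, buckets)  # (qlen, klen, n_heads)
    return tf.expand_dims(tf.transpose(values, perm=[2, 0, 1]), axis=0)  # (1, n_heads, qlen, klen)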
Below is the complete source, with comments added to the important parts.
import copy
import itertools
from dataclasses import dataclass
from typing import Optional, List, Tuple
import tensorflow as tf
from tensorflow.keras import layers
from transformers import shape_list
from transformers.activations_tf import get_tf_activation
from transformers.modeling_tf_utils import get_initializer
from transformers.tf_utils import stable_softmax
from transformers.utils import ModelOutput
class TFT5Model(layers.Layer):
def __init__(self, config, *inputs, **kwargs):
super().__init__(*inputs, **kwargs)
self.shared = TFSharedEmbeddings(config.vocab_size, config.d_model, name="shared")
with tf.compat.v1.variable_scope("shared") as shared_abs_scope_name:
pass
embed_tokens = TFWrappedEmbeddings(self.shared, abs_scope_name=shared_abs_scope_name)
encoder_config = copy.deepcopy(config)
encoder_config.use_cache = False
self.encoder = TFT5MainLayer(encoder_config, embed_tokens, name="encoder")
decoder_config = copy.deepcopy(config)
decoder_config.is_decoder = True
decoder_config.num_layers = config.num_decoder_layers
self.decoder = TFT5MainLayer(decoder_config, embed_tokens, name="decoder")
def call(
self,
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
head_mask=None,
decoder_head_mask=None,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False
):
if head_mask is not None and decoder_head_mask is None:
decoder_head_mask = head_mask
if encoder_outputs is None:
encoder_outputs = self.encoder(
input_ids,
attention_mask=attention_mask,
encoder_hidden_states=None,
encoder_attention_mask=None,
inputs_embeds=inputs_embeds,
head_mask=head_mask,
past_key_values=None,
use_cache=False,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training
)
hidden_states = encoder_outputs[0]
decoder_outputs = self.decoder(
decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=hidden_states,
encoder_attention_mask=attention_mask,
inputs_embeds=decoder_inputs_embeds,
head_mask=decoder_head_mask,
past_key_values=past_key_values,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training
)
past = decoder_outputs[1] if use_cache else None
if not return_dict:
if past is not None:
decoder_outputs = decoder_outputs[: 1] + (past, ) + decoder_outputs[2:]
return decoder_outputs + encoder_outputs
return TFSeq2SeqModelOutput(
last_hidden_state=decoder_outputs.last_hidden_state,
past_key_values=past,
decoder_hidden_states=decoder_outputs.hidden_states,
decoder_attentions=decoder_outputs.attentions,
cross_attentions=decoder_outputs.cross_attentions,
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
encoder_hidden_states=encoder_outputs.hidden_states,
encoder_attentions=encoder_outputs.attentions
)
class TFSharedEmbeddings(layers.Layer):
def __init__(self, vocab_size, hidden_size, initializer_range=None, **kwargs):
super().__init__(**kwargs)
self.vocab_size = vocab_size
self.hidden_size = hidden_size
# default matches the Hugging Face TFSharedEmbeddings: scale by hidden_size ** -0.5
self.initializer_range = hidden_size**-0.5 if initializer_range is None else initializer_range
def build(self, input_shape):
self.weight = self.add_weight(
name="weight",
shape=[self.vocab_size, self.hidden_size],
initializer=get_initializer(self.initializer_range)
)
super().build(input_shape)
def call(self, inputs, mode="embedding"):
if mode == "embedding":
return self._embedding(inputs)
elif mode == "linear":
return self._linear(inputs)
else:
raise ValueError(f"mode {mode} is not valid.")
def _embedding(self, input_ids):
return tf.gather(self.weight, input_ids)
def _linear(self, inputs):
first_dims = shape_list(inputs)[:-1]
x = tf.reshape(inputs, [-1, self.hidden_size])
logits = tf.matmul(x, self.weight, transpose_b=True)
return tf.reshape(logits, first_dims + [self.vocab_size])
class TFWrappedEmbeddings:
def __init__(self, layer, abs_scope_name=None):
self._layer = layer
self._abs_scope_name = abs_scope_name
def call(self, inputs, mode="embedding"):
if self._abs_scope_name is None:
return self._layer.call(inputs, mode)
# if an abs scope name is given to the embedding variable, call variable from absolute scope
with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
with tf.name_scope(abs_scope_name.original_name_scope):
return self._layer.call(inputs, mode)
def __call__(self, inputs, mode="embedding"):
if self._abs_scope_name is None:
return self._layer(inputs, mode)
# if an abs scope name is given to the embedding variable, call variable from absolute scope
with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
with tf.name_scope(abs_scope_name.original_name_scope):
return self._layer(inputs, mode)
class TFT5MainLayer(layers.Layer):
def __init__(self, config, embed_tokens=None, **kwargs):
super().__init__(**kwargs)
self.config = config
self.output_hidden_states = config.output_hidden_states
self.output_attention = config.output_attentions
self.use_cache = config.use_cache
self.embed_tokens = embed_tokens
self.is_decoder = config.is_decoder
self.num_hidden_layers = config.num_layers
self.block = [
TFT5Block(config, has_relative_attention_bias=bool(i == 0), name=f"block_._{i}")
for i in range(config.num_layers)
]
self.final_layer_norm = TFT5LayerNorm(epsilon=config.layer_norm_epsilon, name="final_layer_norm")
self.dropout = layers.Dropout(config.dropout_rate)
def call(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
inputs_embeds=None,
head_mask=None,
encoder_head_mask=None,
past_key_values=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False
):
if input_ids is not None and inputs_embeds is not None:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(
f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time"
)
elif input_ids is not None:
input_shape = shape_list(input_ids)
input_ids = tf.reshape(input_ids, (-1, input_shape[-1]))
elif inputs_embeds is not None:
input_shape = shape_list(inputs_embeds)[: -1]
else:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
if inputs_embeds is None:
assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings"
inputs_embeds = self.embed_tokens(input_ids)
batch_size, seq_length = input_shape
# past_key_values
# shape: (n_layers, 4, batch_size, num_heads, seq_length - 1, head_embed_size)
mask_seq_length = (
shape_list(past_key_values[0][0])[2] + seq_length if past_key_values is not None else seq_length
)
if attention_mask is None:
attention_mask = tf.fill((batch_size, mask_seq_length), 1)
if self.is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None:
encoder_seq_length = shape_list(encoder_hidden_states)[1]
encoder_attention_mask = tf.fill((batch_size, encoder_seq_length), 1)
if past_key_values is None:
past_key_values = [None] * len(self.block)
attention_mask = tf.cast(attention_mask, dtype=inputs_embeds.dtype)
num_dims_attention_mask = len(shape_list(attention_mask))
# if attention_mask is a 3D tensor
# shape: (batch_size, mask_seq_length, mask_seq_length) -> (batch_size, 1, mask_seq_length, mask_seq_length)
if num_dims_attention_mask == 3:
extended_attention_mask = attention_mask[:, None, :, :]
# if attention_mask is a 2D tensor
# shape: (batch_size, mask_seq_length) -> (batch_size, 1, 1, mask_seq_length)
elif num_dims_attention_mask == 2:
if self.is_decoder:
seq_ids = tf.range(mask_seq_length)
causal_mask = tf.less_equal(
tf.tile(seq_ids[None, None, :], (batch_size, mask_seq_length, 1)),
seq_ids[None, :, None]
)
causal_mask = tf.cast(causal_mask, dtype=attention_mask.dtype)
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
if past_key_values[0] is not None:
extended_attention_mask = extended_attention_mask[:, :, -seq_length:, :]
else:
extended_attention_mask = attention_mask[:, None, None, :]
extended_attention_mask = (1.0 - extended_attention_mask) * -1e9
if self.is_decoder and encoder_attention_mask is not None:
encoder_attention_mask = tf.cast(encoder_attention_mask, dtype=extended_attention_mask.dtype)
num_dims_encoder_attention_mask = len(shape_list(encoder_attention_mask))
if num_dims_encoder_attention_mask == 3:
encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :]
if num_dims_encoder_attention_mask == 2:
encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :]
encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e9
else:
encoder_extended_attention_mask = None
present_key_value_states = () if use_cache and self.is_decoder else None
all_hidden_states = () if output_hidden_states else None
all_attentions = () if output_attentions else None
all_cross_attentions = () if (output_attentions and self.is_decoder) else None
position_bias = None
encoder_decoder_position_bias = None
hidden_states = self.dropout(inputs_embeds, training=training)
for idx, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)):
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
# layer_outputs is a tuple with:
# hidden_states, past_key_values, (self-attention position bias), (self-attention weights),
# (cross-attention position bias), (cross-attention weights),
layer_outputs = layer_module(
hidden_states,
attention_mask=extended_attention_mask,
position_bias=position_bias,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_extended_attention_mask,
encoder_decoder_position_bias=encoder_decoder_position_bias,
layer_head_mask=head_mask[idx] if head_mask is not None else None,
encoder_layer_head_mask=encoder_head_mask[idx] if encoder_head_mask is not None else None,
past_key_value=past_key_value,
use_cache=use_cache,
output_attentions=output_attentions,
training=training,
)
hidden_states, present_key_value_state = layer_outputs[: 2]
position_bias = layer_outputs[2]
if self.is_decoder and encoder_hidden_states is not None:
encoder_decoder_position_bias = layer_outputs[4 if output_attentions else 3]
if present_key_value_state is not None and use_cache and self.is_decoder:
present_key_value_states = present_key_value_states + (present_key_value_state, )
if output_attentions:
all_attentions = all_attentions + (layer_outputs[3], )
if self.is_decoder:
all_cross_attentions = all_cross_attentions + (layer_outputs[5], )
hidden_states = self.final_layer_norm(hidden_states)
hidden_states = self.dropout(hidden_states, training=training)
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
if not return_dict:
outputs = (hidden_states,)
# need to check if is decoder here as well for special cases when using keras compile
if use_cache and self.is_decoder:
outputs = outputs + (present_key_value_states,)
if output_hidden_states:
outputs = outputs + (all_hidden_states,)
if output_attentions:
outputs = outputs + (all_attentions,)
if self.is_decoder:
outputs = outputs + (all_cross_attentions,)
return outputs
if self.is_decoder:
return TFBaseModelOutputWithPastAndCrossAttentions(
last_hidden_state=hidden_states,
past_key_values=present_key_value_states,
hidden_states=all_hidden_states,
attentions=all_attentions,
cross_attentions=all_cross_attentions,
)
else:
return TFBaseModelOutput(
last_hidden_state=hidden_states,
hidden_states=all_hidden_states,
attentions=all_attentions,
)
class TFT5Block(layers.Layer):
def __init__(self, config, has_relative_attention_bias=False, **kwargs):
super().__init__(**kwargs)
self.is_decoder = config.is_decoder
self.layer = []
self.layer.append(
TFT5LayerSelfAttention(
config,
has_relative_attention_bias=has_relative_attention_bias,
name="layer_._0"
)
)
if self.is_decoder:
self.layer.append(
TFT5LayerCrossAttention(
config,
name="layer_._1"
)
)
self.layer.append(TFT5LayerFF(config, name=f"layer_._{len(self.layer)}"))
def call(
self,
hidden_states,
attention_mask=None,
position_bias=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
encoder_decoder_position_bias=None,
layer_head_mask=None,
encoder_layer_head_mask=None,
past_key_value=None,
use_cache=False,
output_attentions=False,
training=False
):
if past_key_value is not None:
expected_num_past_key_values = 2 if encoder_hidden_states is None else 4
if len(past_key_value) != expected_num_past_key_values:
raise ValueError(
f"There should be {expected_num_past_key_values} past states. "
f"{'2 (past / key) for cross attention' if expected_num_past_key_values == 4 else ''}."
f"Got {len(past_key_value)} past key / value states"
)
self_attn_past_key_value = past_key_value[: 2]
cross_attn_past_key_value = past_key_value[2:]
else:
self_attn_past_key_value, cross_attn_past_key_value = None, None
self_attention_outputs = self.layer[0](
hidden_states,
attention_mask=attention_mask,
position_bias=position_bias,
layer_head_mask=layer_head_mask,
past_key_value=self_attn_past_key_value,
use_cache=use_cache,
output_attentions=output_attentions,
training=training
)
hidden_states, present_key_value_state = self_attention_outputs[: 2]
attention_outputs = self_attention_outputs[2:]
if self.is_decoder and encoder_hidden_states is not None:
if present_key_value_state is not None:
query_length = shape_list(present_key_value_state[0])[2]
else:
query_length = None
# cross_attention_outputs
# (hidden_states, past_key_values, (cross_attention_position_bias, cross_attention_weights))
cross_attention_outputs = self.layer[1](
hidden_states,
key_value_states=encoder_hidden_states,
attention_mask=encoder_attention_mask,
position_bias=encoder_decoder_position_bias,
layer_head_mask=encoder_layer_head_mask,
past_key_value=cross_attn_past_key_value,
query_length=query_length,
use_cache=use_cache,
output_attentions=output_attentions,
training=training
)
hidden_states = cross_attention_outputs[0]
if present_key_value_state is not None:
present_key_value_state = present_key_value_state + cross_attention_outputs[1]
attention_outputs = attention_outputs + cross_attention_outputs[2:]
hidden_states = self.layer[-1](hidden_states, training=training)
outputs = (hidden_states, )
outputs = outputs + (present_key_value_state, ) + attention_outputs
return outputs
class TFT5LayerSelfAttention(layers.Layer):
def __init__(self, config, has_relative_attention_bias=False, **kwargs):
super().__init__(**kwargs)
self.self_attention = TFT5Attention(
config,
has_relative_attention_bias=has_relative_attention_bias,
name="self_attention"
)
self.layer_norm = TFT5LayerNorm(epsilon=config.layer_norm_epsilon, name="layer_norm")
self.dropout = layers.Dropout(config.dropout_rate)
def call(
self,
hidden_states,
attention_mask=None,
position_bias=None,
layer_head_mask=None,
past_key_value=None,
use_cache=False,
output_attentions=False,
training=False
):
normed_hidden_states = self.layer_norm(hidden_states)
# if output_attentions=True, attention_output is a 4-tuple:
# (hidden_states, past_key_value, position_bias, attention_weights)
# otherwise attention_output is a 3-tuple:
# (hidden_states, past_key_value, position_bias)
attention_output = self.self_attention(
normed_hidden_states,
mask=attention_mask,
position_bias=position_bias,
layer_head_mask=layer_head_mask,
past_key_value=past_key_value,
use_cache=use_cache,
output_attentions=output_attentions,
training=training
)
# residual connection
hidden_states = hidden_states + self.dropout(attention_output[0])
outputs = (hidden_states, ) + attention_output[1:]
return outputs
class TFT5LayerCrossAttention(layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.cross_attention = TFT5Attention(
config,
has_relative_attention_bias=False,
name="cross_attention"
)
self.layer_norm = TFT5LayerNorm(epsilon=config.layer_norm_epsilon, name="layer_norm")
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
def call(
self,
hidden_states,
key_value_states=None,
attention_mask=None,
position_bias=None,
layer_head_mask=None,
past_key_value=None,
query_length=None,
use_cache=None,
output_attentions=False,
training=False
):
normed_hidden_states = self.layer_norm(hidden_states)
attention_output = self.cross_attention(
normed_hidden_states,
mask=attention_mask,
key_value_states=key_value_states,
position_bias=position_bias,
layer_head_mask=layer_head_mask,
past_key_value=past_key_value,
query_length=query_length,
use_cache=use_cache,
output_attentions=output_attentions,
training=training
)
hidden_states = hidden_states + self.dropout(attention_output[0])
outputs = (hidden_states, ) + attention_output[1:]
return outputs
class TFT5Attention(layers.Layer):
NEW_ID = itertools.count()
def __init__(self, config, has_relative_attention_bias=False, **kwargs):
super().__init__(**kwargs)
self.layer_id = next(TFT5Attention.NEW_ID)
self.is_decoder = config.is_decoder
self.use_cache = config.use_cache
self.has_relative_attention_bias = has_relative_attention_bias
self.output_attentions = config.output_attentions
self.relative_attention_num_buckets = config.relative_attention_num_buckets
self.relative_attention_max_distance = config.relative_attention_max_distance
self.d_model = config.d_model
self.key_value_proj_dim = config.d_kv
self.n_heads = config.num_heads
self.inner_dim = self.n_heads * self.key_value_proj_dim
q_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * ((self.inner_dim * self.key_value_proj_dim) ** -0.5)
)
k_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
v_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
o_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
self.relative_attention_bias_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (self.inner_dim**-0.5)
)
self.q = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="q", kernel_initializer=q_initializer
)
self.k = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="k", kernel_initializer=k_initializer
)
self.v = tf.keras.layers.Dense(
self.inner_dim, use_bias=False, name="v", kernel_initializer=v_initializer
)
self.o = tf.keras.layers.Dense(
self.d_model, use_bias=False, name="o", kernel_initializer=o_initializer
)
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
def build(self, input_shape):
if self.has_relative_attention_bias:
with tf.name_scope("relative_attention_bias"):
self.relative_attention_bias = self.add_weight(
name="enbeddings",
shape=[self.relative_attention_num_buckets, self.n_heads],
initializer=self.relative_attention_bias_initializer
)
return super().build(input_shape)
def call(
self,
hidden_states,
mask=None,
key_value_states=None,
position_bias=None,
past_key_value=None,
layer_head_mask=None,
query_length=None,
use_cache=False,
training=False,
output_attentions=False
):
# hidden_states
# shape: (batch_size, seq_length, hidden_dim)
batch_size, seq_length = shape_list(hidden_states)[: 2]
real_seq_length = seq_length
# past_key_value: a tuple of two tensors (cached key states, cached value states),
# each of shape (batch_size, n_heads, past_length, dim_per_head)
# If past_key_value is not None we are doing incremental decoding, so hidden_states has seq_length = 1
# and real_seq_length = query_length = past_length + 1
if past_key_value is not None:
real_seq_length += shape_list(past_key_value[0])[2] if query_length is None else query_length
# key_value_states is the encoder_hidden_states
# shape: (batch_size, seq_length, hidden_dim)
# for cross-attention, key_length = shape_list(key_value_states)[1]
# for self-attention, key_length = real_seq_length
key_length = real_seq_length if key_value_states is None else shape_list(key_value_states)[1]
def shape(hidden_states):
return tf.transpose(
tf.reshape(hidden_states, (batch_size, -1, self.n_heads, self.key_value_proj_dim)),
perm=(0, 2, 1, 3)
)
def unshape(hidden_states):
return tf.reshape(tf.transpose(hidden_states, perm=(0, 2, 1, 3)), (batch_size, -1, self.inner_dim))
def project(hidden_states, proj_layer, key_value_states, past_key_value):
if key_value_states is None:
# encoder self-attn
hidden_states = shape(proj_layer(hidden_states))
elif past_key_value is None:
# decoder cross-attn
hidden_states = shape(proj_layer(key_value_states))
if past_key_value is not None:
if key_value_states is None:
# decoder self-attn
hidden_states = tf.concat([past_key_value, hidden_states], axis=2)
else:
# decoder cross-attn
hidden_states = past_key_value
return hidden_states
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, query_length, dim_per_head)
key_states = project(
hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None
)
value_states = project(
hidden_states, self.v, key_value_states, past_key_value[1] if past_key_value is not None else None
)
if self.is_decoder and use_cache:
present_key_value_state = (key_states, value_states)
else:
present_key_value_state = None
scores = tf.einsum(
"bnqd,bnkd->bnqk", query_states, key_states
)
if position_bias is None:
if not self.has_relative_attention_bias:
position_bias = tf.zeros((1, self.n_heads, real_seq_length, key_length))
else:
position_bias = self.compute_bias(real_seq_length, key_length)
if past_key_value is not None:
position_bias = position_bias[:, :, -seq_length:, :]
if mask is not None:
position_bias = tf.cast(position_bias, dtype=mask.dtype)
position_bias = position_bias + mask
scores += position_bias
weights = stable_softmax(scores, axis=-1)
weights = self.dropout(weights, training=training)
if layer_head_mask is not None:
weights = tf.reshape(layer_head_mask, (1, -1, 1, 1)) * weights
attn_output = tf.matmul(weights, value_states) # (batch_size, n_heads, query_length, dim_per_head)
attn_output = self.o(unshape(attn_output))
outputs = (attn_output,) + (present_key_value_state,) + (position_bias,)
if output_attentions:
outputs = outputs + (weights,)
return outputs
class TFT5LayerNorm(layers.Layer):
def __init__(self, epsilon=1e-6, **kwargs):
super().__init__(**kwargs)
self.variance_epsilon = epsilon
def build(self, input_shape):
self.weight = self.add_weight("weight", shape=(input_shape[-1],), initializer="ones")
super().build(input_shape)
def call(self, hidden_states):
variance = tf.math.reduce_mean(tf.math.square(hidden_states), axis=-1, keepdims=True)
hidden_states = hidden_states * tf.math.rsqrt(variance + self.variance_epsilon)
return self.weight * hidden_states
class TFT5LayerFF(layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
if config.feed_forward_proj == "relu":
self.DenseReluDense = TFT5DenseReluDense(config, name="DenseReluDense")
elif config.feed_forward_proj == "gated-gelu":
self.DenseReluDense = TFT5GatedGeluDense(config, name="DenseReluDense")
else:
raise ValueError(
f"{self.config.feed_forward_proj} is not supported. Choose between `relu` and `gated-gelu`"
)
self.layer_norm = TFT5LayerNorm(epsilon=config.layer_norm_epsilon, name="layer_norm")
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
def call(self, hidden_states, training=False):
normed_hidden_states = self.layer_norm(hidden_states)
dense_output = self.DenseReluDense(normed_hidden_states, training=training)
hidden_states = hidden_states + self.dropout(dense_output, training=training)
return hidden_states
class TFT5DenseReluDense(layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
wi_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (config.d_model**-0.5)
)
wo_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (config.d_ff**-0.5)
)
self.wi = tf.keras.layers.Dense(
config.d_ff, use_bias=False, name="wi", kernel_initializer=wi_initializer
)
self.wo = tf.keras.layers.Dense(
config.d_model, use_bias=False, name="wo", kernel_initializer=wo_initializer
)
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
self.act = tf.keras.activations.relu
def call(self, hidden_states, training=False):
hidden_states = self.wi(hidden_states)
hidden_states = self.act(hidden_states)
hidden_states = self.dropout(hidden_states, training=training)
hidden_states = self.wo(hidden_states)
return hidden_states
class TFT5GatedGeluDense(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
wi_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (config.d_model**-0.5)
)
wo_initializer = tf.keras.initializers.RandomNormal(
mean=0, stddev=config.initializer_factor * (config.d_ff**-0.5)
)
self.wi_0 = tf.keras.layers.Dense(
config.d_ff, use_bias=False, name="wi_0", kernel_initializer=wi_initializer
)
self.wi_1 = tf.keras.layers.Dense(
config.d_ff, use_bias=False, name="wi_1", kernel_initializer=wi_initializer
)
self.wo = tf.keras.layers.Dense(
config.d_model, use_bias=False, name="wo", kernel_initializer=wo_initializer
)
self.dropout = tf.keras.layers.Dropout(config.dropout_rate)
self.act = get_tf_activation("gelu_new")
def call(self, hidden_states, training=False):
hidden_gelu = self.act(self.wi_0(hidden_states))
hidden_linear = self.wi_1(hidden_states)
hidden_states = hidden_gelu * hidden_linear
hidden_states = self.dropout(hidden_states, training=training)
hidden_states = self.wo(hidden_states)
return hidden_states
@dataclass
class TFSeq2SeqModelOutput(ModelOutput):
last_hidden_state: tf.Tensor = None
past_key_values: Optional[List[tf.Tensor]] = None
decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
decoder_attentions: Optional[Tuple[tf.Tensor]] = None
cross_attentions: Optional[Tuple[tf.Tensor]] = None
encoder_last_hidden_state: Optional[tf.Tensor] = None
encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
encoder_attentions: Optional[Tuple[tf.Tensor]] = None
@dataclass
class TFBaseModelOutputWithPastAndCrossAttentions(ModelOutput):
last_hidden_state: tf.Tensor = None
past_key_values: Optional[List[tf.Tensor]] = None
hidden_states: Optional[Tuple[tf.Tensor]] = None
attentions: Optional[Tuple[tf.Tensor]] = None
cross_attentions: Optional[Tuple[tf.Tensor]] = None
@dataclass
class TFBaseModelOutput(ModelOutput):
last_hidden_state: tf.Tensor = None
hidden_states: Optional[Tuple[tf.Tensor]] = None
attentions: Optional[Tuple[tf.Tensor]] = None
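Finally, as a quick smoke test (my own sketch, not part of the original post; it assumes the classes above run as written and that T5Config is available from transformers), the model can be instantiated with a tiny random configuration:
from transformers import T5Config

tiny_config = T5Config(
    vocab_size=100, d_model=64, d_kv=16, num_heads=4, d_ff=128,
    num_layers=2, num_decoder_layers=2,
)
tiny_model = TFT5Model(tiny_config)
input_ids = tf.constant([[13, 7, 42, 1]])          # encoder input token ids
decoder_input_ids = tf.constant([[0, 13, 7]])      # decoder input token ids
outputs = tiny_model(input_ids, decoder_input_ids=decoder_input_ids, return_dict=True)
print(outputs.last_hidden_state.shape)  # expected: (1, 3, 64)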