
Token Embedding and Positional Embedding in NLP

Example 1

Token embedding is a common way of converting text into vectors. The example below first tokenizes a Portuguese sentence into integer token IDs.

  import tensorflow as tf

  # Load the exported Portuguese/English tokenizer pair used in the TensorFlow
  # translation tutorial.
  model_name = "ted_hrlr_translate_pt_en_converter"
  tokenizers = tf.saved_model.load(model_name)

  sentence = "este é um problema que temos que resolver."
  sentence = tf.constant(sentence)
  sentence = sentence[tf.newaxis]                           # add a batch dimension
  sentence = tokenizers.pt.tokenize(sentence).to_tensor()   # ragged IDs -> dense tensor
  print(sentence.shape)
  print(sentence)

(1, 11)
tf.Tensor([[  2 125  44  85 231  84 130  84 742  16   3]], shape=(1, 11), dtype=int64)

  # Tokenizing an empty string yields only the reserved [START] and [END] IDs.
  start_end = tokenizers.en.tokenize([''])[0]
  print(start_end)
  start = start_end[0][tf.newaxis]   # [START] ID as a length-1 tensor
  print(start)
  end = start_end[1][tf.newaxis]     # [END] ID as a length-1 tensor
  print(end)

tf.Tensor([2 3], shape=(2,), dtype=int64)
tf.Tensor([2], shape=(1,), dtype=int64)
tf.Tensor([3], shape=(1,), dtype=int64)

The word "token" carries the sense of "occupying": each ID, and the vector behind it, is occupied by exactly one word or subword.
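To see which subword occupies each ID from Example 1, the tokenizer output can be mapped back to text. This is a minimal sketch; it assumes tokenizers.pt exposes the same lookup method that tokenizers.en demonstrates in Example 2 below.

  # Minimal sketch (assumption: tokenizers.pt.lookup mirrors tokenizers.en.lookup).
  ids = tokenizers.pt.tokenize(tf.constant(["este é um problema que temos que resolver."]))
  print(tokenizers.pt.lookup(ids))   # [START], the Portuguese subwords, then [END]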

Example 2

Like Example 1, this is a Portuguese-to-English translation example: it loads the TED Talks dataset, tokenizes a batch of English sentences, and detokenizes them back to text.

  import logging
  import tensorflow_datasets as tfds
  logging.getLogger('tensorflow').setLevel(logging.ERROR)  # suppress warnings
  import tensorflow as tf

  # Load the TED Talks Portuguese-to-English translation dataset.
  examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                                 as_supervised=True)
  train_examples, val_examples = examples['train'], examples['validation']

  # Print the first batch of 3 sentence pairs.
  for pt_examples, en_examples in train_examples.batch(3).take(1):
      for pt in pt_examples.numpy():
          print(pt.decode('utf-8'))
      for en in en_examples.numpy():
          print(en.decode('utf-8'))

  # Tokenize the English batch, then detokenize it again (round trip).
  model_name = "ted_hrlr_translate_pt_en_converter"
  tokenizers = tf.saved_model.load(model_name)
  encoded = tokenizers.en.tokenize(en_examples)
  for row in encoded.to_list():
      print(row)
  round_trip = tokenizers.en.detokenize(encoded)
  for line in round_trip.numpy():
      print(line.decode('utf-8'))

e quando melhoramos a procura , tiramos a única vantagem da impressão , que é a serendipidade .
mas e se estes fatores fossem ativos ?
mas eles não tinham a curiosidade de me testar .
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n't test for curiosity .
[2, 72, 117, 79, 1259, 1491, 2362, 13, 79, 150, 184, 311, 71, 103, 2308, 74, 2679, 13, 148, 80, 55, 4840, 1434, 2423, 540, 15, 3]
[2, 87, 90, 107, 76, 129, 1852, 30, 3]
[2, 87, 83, 149, 50, 9, 56, 664, 85, 2512, 15, 3]
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n ' t test for curiosity .

  # Map each English token ID back to the subword text that occupies it.
  tokens = tokenizers.en.lookup(encoded)
  print(tokens)

<tf.RaggedTensor [[b'[START]', b'and', b'when', b'you', b'improve', b'search', b'##ability', b',', b'you', b'actually', b'take', b'away', b'the', b'one', b'advantage', b'of', b'print', b',', b'which', b'is', b's', b'##ere', b'##nd', b'##ip', b'##ity', b'.', b'[END]'], [b'[START]', b'but', b'what', b'if', b'it', b'were', b'active', b'?', b'[END]'], [b'[START]', b'but', b'they', b'did', b'n', b"'", b't', b'test', b'for', b'curiosity', b'.', b'[END]']]>
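In the outputs above, ID 2 is [START] and ID 3 is [END] (as the start_end tensor in Example 1 showed), and ID 0 is typically the padding token [PAD]. A minimal check of the reserved tokens and the vocabulary size, assuming the exported converter provides the get_vocab_size and get_reserved_tokens helpers as in the TensorFlow subword-tokenizer tutorial:

  # Minimal sketch (assumption: these helper methods are exported with the model).
  print(tokenizers.en.get_vocab_size())       # total number of subword IDs
  print(tokenizers.en.get_reserved_tokens())  # e.g. [PAD], [UNK], [START], [END]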

Example 3

"Embedding" can be understood as embedding low-dimensional information into a higher-dimensional space.

  import tensorflow as tf

  model_name = "ted_hrlr_translate_pt_en_converter"
  tokenizers = tf.saved_model.load(model_name)

  d_model = 128                                               # embedding depth
  input_vocab_size = tokenizers.pt.get_vocab_size().numpy()   # Portuguese vocabulary size
  embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)

  # One tokenized sentence of 12 token IDs (with some padding zeros).
  x = tf.constant([[2, 87, 90, 107, 76, 129, 1852, 30, 0, 0, 0, 3]])
  x = embedding(x)
  print(input_vocab_size)
  print(x.shape)
  print(x)

7765
(1, 12, 128)
tf.Tensor(
[[[-0.02317628  0.04599813 -0.0104699  ... -0.03233253 -0.02013252
    0.00171118]
  [-0.02195768  0.0341222   0.00689759 ... -0.00260416  0.02308804
    0.03915772]
  [-0.00282265  0.03714179 -0.03591241 ... -0.03974506 -0.04376533
    0.03113948]
  ...
  [-0.0277048  -0.03750116 -0.03355522 ... -0.00703954 -0.02855991
    0.00357056]
  [-0.0277048  -0.03750116 -0.03355522 ... -0.00703954 -0.02855991
    0.00357056]
  [ 0.04611469  0.04663144  0.02595479 ... -0.03400488 -0.00206001
   -0.03282105]]], shape=(1, 12, 128), dtype=float32)

This example embeds a sequence of 12 token IDs into a higher-dimensional 12×128 matrix: each ID is replaced by a trainable 128-dimensional vector.
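Under the hood, the Embedding layer is just a trainable lookup table. A minimal sketch of that equivalence, reusing the embedding layer from Example 3 and the standard Keras get_weights accessor:

  # Minimal sketch: an Embedding layer gathers rows of its weight matrix.
  ids = tf.constant([[2, 87, 90, 107, 76, 129, 1852, 30, 0, 0, 0, 3]])
  table = embedding.get_weights()[0]                        # shape (input_vocab_size, d_model)
  manual = tf.gather(table, ids)                            # shape (1, 12, 128)
  print(tf.reduce_all(manual == embedding(ids)).numpy())    # True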

Example 4

The Transformer's positional embedding. In practice, the implementation usually precomputes the positional encodings for a fixed number of positions (here 1000) from the depth d_model, and then slices out as many positions as the actual input length at run time.
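The code below implements the standard sinusoidal encoding from "Attention Is All You Need", where pos is the position and i indexes the depth dimension:

  PE(pos, 2i)   = sin( pos / 10000^(2i / d_model) )
  PE(pos, 2i+1) = cos( pos / 10000^(2i / d_model) )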

  import numpy as np
  import tensorflow as tf

  d_model = 128     # embedding depth
  position = 1000   # number of precomputed positions

  def get_angles(pos, i, d_model):
      angle_rates = 1 / np.power(10000, (2 * (i // 2)) / np.float32(d_model))
      return pos * angle_rates

  def positional_encoding(position, d_model):
      angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                              np.arange(d_model)[np.newaxis, :],
                              d_model)
      # apply sin to even indices in the array; 2i
      angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
      # apply cos to odd indices in the array; 2i+1
      angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
      pos_encoding = angle_rads[np.newaxis, ...]
      return tf.cast(pos_encoding, dtype=tf.float32)

  x = tf.constant([[2, 87, 90, 107, 76, 129, 1852, 30, 0, 0, 0, 3]])
  seq_len = tf.shape(x)[1]
  print(seq_len)

  # Precompute 1000 positional encodings, then slice to the actual sequence length.
  pos_encoding = positional_encoding(position, d_model)
  print(pos_encoding.shape)
  pe = pos_encoding[:, :seq_len, :]
  print(pe.shape)

tf.Tensor(12, shape=(), dtype=int32)
(1, 1000, 128)
(1, 12, 128)
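In a real Transformer, the two embeddings are combined by adding the positional slice to the scaled token embeddings. A minimal sketch of that last step, reusing the embedding layer from Example 3 and the pos_encoding from Example 4; the sqrt(d_model) scaling follows the original paper and the TensorFlow Transformer tutorial:

  # Minimal sketch: token embeddings + positional encodings at the encoder input.
  tok = embedding(x)                                   # (1, 12, 128) token embeddings
  tok *= tf.math.sqrt(tf.cast(d_model, tf.float32))    # scale, as in "Attention Is All You Need"
  combined = tok + pos_encoding[:, :seq_len, :]        # add the positional slice
  print(combined.shape)                                # (1, 12, 128)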
