The text must be converted into variables; to keep the computation simple, only the news titles are used for classification.
Environment: AI Studio
A text classification task usually proceeds as follows.
There are many text preprocessing techniques, related to tasks such as part-of-speech tagging, syntactic parsing, and named entity recognition. Before classification, the text has to be converted into a structured representation; common encodings include one-hot, n-gram, and word2vec. Unlike English, which can be tokenized simply on spaces and punctuation, Chinese runs words together with no delimiters, so the text must first be segmented, e.g. with jieba. The segmented corpus is then converted into ID sequences and fed through a word embedding layer.
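The "segment, then map words to IDs" step above can be sketched in plain Python. Tokenization is assumed already done (jieba would normally produce the tokens); the tiny corpus and vocabulary here are made up for illustration:

```python
# Minimal sketch of the word -> id conversion described above.
# Id 0 is reserved for out-of-vocabulary ([oov]) words.

def build_vocab(tokenized_sentences):
    """Assign each distinct token an integer id, reserving 0 for [oov]."""
    vocab = {'[oov]': 0}
    for tokens in tokenized_sentences:
        for tok in tokens:
            if tok not in vocab:
                vocab[tok] = len(vocab)
    return vocab

def to_ids(tokens, vocab):
    """Map tokens to ids, falling back to the [oov] id for unseen tokens."""
    return [vocab.get(tok, vocab['[oov]']) for tok in tokens]

corpus = [['股市', '今日', '大涨'], ['球队', '今日', '夺冠']]
vocab = build_vocab(corpus)
print(to_ids(['股市', '昨日', '夺冠'], vocab))  # [1, 0, 5]: '昨日' is unseen -> id 0
```

The resulting ID sequences are what the embedding layer later consumes.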
Deep-learning models well suited to text include RNN, LSTM, and GRU.
After training, the model is used to make predictions on new data.
Tutorial: https://aistudio.baidu.com/aistudio/projectdetail/1735533?forkThirdPart=1
```
!unzip /home/aistudio/data/data8164/THUCNews.zip
```
After unzipping there are 14 categories; each folder contains the news text files.
(I later decided the sample was too small and switched to the full dataset, which made the regex cleanup and word segmentation slow; FlashText is recommended for fast removal of Chinese and English symbols.)
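For pure symbol stripping, the chain of `re.sub` calls used later can also be collapsed into one pass with `str.translate` and a precomputed table, a fast stdlib alternative (FlashText, recommended above, is aimed at keyword replacement rather than character removal). The symbol set below mirrors the characters removed in this post; the sample title is made up:

```python
# One-pass removal of digits, punctuation, and spaces via a translation table.
strip_table = str.maketrans('', '', '0123456789,,.%:《》 ')

def clean_title(title):
    """Remove digits, the listed punctuation, and spaces in a single pass."""
    return title.translate(strip_table)

print(clean_title('《新闻》:A股大涨3.5%,创年内新高'))  # 新闻A股大涨创年内新高
```

Unlike the regex version, this does not drop parenthesized content; keep the `re.sub(r'\((.*?)\)', '', s_t)` step for that.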
```python
import os

def read_data(file_dir):
    z_list = []
    for parent, dirnames, filenames in os.walk(file_dir):
        if parent == file_dir:
            continue  # skip the root directory itself
        label = parent.split('/')[-1]  # the folder name is the category label
        print("current folder:", parent)
        for fname in filenames:
            # only the first line of each file (the news title) is used
            with open(os.path.join(parent, fname), 'r', encoding='utf-8') as fo:
                title = fo.readline().strip()
            z_list.append((title, label))
    return z_list

file_dir = '/home/aistudio/THUCNews'
data = read_data(file_dir)
```
```python
import re
import jieba

jieba_c_list = []
# stop-word dictionary
stopwords = {}.fromkeys([line.rstrip() for line in open('/home/aistudio/stopwords.txt')])
for title, label in data:
    s_t = title.replace(' ', '')
    s_t = re.sub(r'\((.*?)\)', '', s_t)       # drop parenthesized content
    s_t = re.sub(r'[\d,,.%:《》]', '', s_t)   # drop digits and punctuation in one pass
    # segment with jieba and filter out stop words
    final = [seg for seg in jieba.cut(s_t, cut_all=False) if seg not in stopwords]
    jieba_c_list.append((final, label))
```
```python
import paddlehub

# alternative segmentation with the LAC lexical-analysis module
lac = paddlehub.Module(name='lac')
lac_list = []
for title, label in data:
    s_t = title.replace(' ', '')
    s_t = re.sub(r'\((.*?)\)', '', s_t)       # drop parenthesized content
    s_t = re.sub(r'[\d,,.%:《》]', '', s_t)   # drop digits and punctuation
    results = lac.lexical_analysis(texts=[s_t], batch_size=1)
    lac_list.append((results[0]['word'], label))
```
(1) Build the corpus dictionary
```python
# Build the dictionary: count each word's frequency and assign each word an
# integer id in descending frequency order
def build_dict(corpus):
    word_freq_dict = dict()
    for sentence, _ in corpus:
        for word in sentence:
            if word not in word_freq_dict:
                word_freq_dict[word] = 0
            word_freq_dict[word] += 1

    word_freq_dict = sorted(word_freq_dict.items(), key=lambda x: x[1], reverse=True)

    word2id_dict = dict()
    word2id_freq = dict()

    # [oov] and [pad] go at the front of the vocabulary so they get small,
    # easy-to-remember ids and the vocabulary stays easy to extend later.
    # They must be present: convert_corpus_to_id below looks up '[oov]' and
    # build_batch looks up '[pad]'.
    word2id_dict['[oov]'] = 0
    word2id_freq[0] = 1e10
    word2id_dict['[pad]'] = 1
    word2id_freq[1] = 1e10

    for word, freq in word_freq_dict:
        word2id_dict[word] = len(word2id_dict)
        word2id_freq[word2id_dict[word]] = freq

    return word2id_freq, word2id_dict

word2id_freq, word2id_dict = build_dict(jieba_c_list)
vocab_size = len(word2id_freq)
print("there are totally %d different words in the corpus" % vocab_size)
for _, (word, word_id) in zip(range(10), word2id_dict.items()):
    print("word %s, its id %d, its word freq %d" % (word, word_id, word2id_freq[word_id]))
```
(2) Convert the corpus to ID sequences
```python
# Labels must be 0-based (0..13) so they line up with a 14-way softmax / cross-entropy
label_dict = {'时政':0,'星座':1,'股票':2,'彩票':3,'科技':4,'娱乐':5,'房产':6,'社会':7,'财经':8,'游戏':9,'体育':10,'时尚':11,'家居':12,'教育':13}

def convert_corpus_to_id(corpus, word2id_dict):
    data_set = []
    for sentence, sentence_label in corpus:
        # Replace each word in the sentence with its id; words outside the
        # vocabulary map to [oov]. It is worth checking the oov ratio on the
        # test set: a high ratio means the training data is insufficient or
        # the train/test split is badly skewed, and needs adjusting.
        sentence = [word2id_dict[word] if word in word2id_dict
                    else word2id_dict['[oov]'] for word in sentence]
        data_set.append((sentence, label_dict[sentence_label]))
    return data_set

train_corpus = convert_corpus_to_id(jieba_c_list, word2id_dict)
print("%d sentences in the corpus" % len(train_corpus))
print(train_corpus[:5])
```
```python
from paddlehub.reader.tokenization import load_vocab

# Labels must be 0-based (0..13) so they line up with a 14-way softmax / cross-entropy
label_dict = {'时政':0,'星座':1,'股票':2,'彩票':3,'科技':4,'娱乐':5,'房产':6,'社会':7,'财经':8,'游戏':9,'体育':10,'时尚':11,'家居':12,'教育':13}

# Convert Chinese words to their ids in a pre-trained vocabulary
def convert_tokens_to_ids(vocab, tokens):  # inputs: vocabulary dict and a token list
    wids = []
    for token in tokens:
        wid = vocab.get(token, None)
        if wid is None:  # note: `if not wid` would wrongly discard the word with id 0
            wid = vocab["unknown"]
        wids.append(wid)
    return wids

module = paddlehub.Module(name="word2vec_skipgram")  # instantiate the word2vec_skipgram model

# load_vocab (from paddlehub.reader.tokenization) reads a vocabulary file and
# returns a dict whose keys are words and whose values are their ids.
# module.get_vocab_path() looks for vocab.txt under the assets folder of the
# given Module (here word2vec_skipgram) and returns its full path if found.
vocab = load_vocab(module.get_vocab_path())

tokens_ids = []
for item, label in lac_list:
    item_ids = convert_tokens_to_ids(vocab, item)  # id list for the words of one title
    tokens_ids.append((item_ids, label_dict[label]))

for i in range(5):
    print("token: %s; id: %s" % (lac_list[i], tokens_ids[i]))
```
The embedding is applied when building the neural network, via paddle.nn.Embedding:

```python
Embedding(num_embeddings=vocab_size, embedding_dim=hidden_size, sparse=False,
          weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Uniform(low=-init_scale, high=init_scale)))
```
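Conceptually, an embedding layer is just a learned lookup table: each word id selects one row of a `[vocab_size, hidden_size]` weight matrix. A minimal numpy sketch with made-up toy sizes, mirroring the `Uniform(-init_scale, init_scale)` initialization above:

```python
import numpy as np

vocab_size, hidden_size, init_scale = 10, 4, 0.1  # toy sizes for illustration

# uniform initialization in [-init_scale, init_scale], as in the snippet above
rng = np.random.default_rng(0)
weight = rng.uniform(-init_scale, init_scale, size=(vocab_size, hidden_size))

ids = np.array([3, 1, 4])  # an id sequence for one (short) sentence
x_emb = weight[ids]        # embedding lookup = row indexing
print(x_emb.shape)         # (3, 4): one hidden_size vector per word
```

During training, gradients flow back into the selected rows of `weight`, which is why the embedding is declared as a layer rather than a fixed table.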
Word2Vec has two training schemes, CBOW and skip-gram; it is implemented by libraries such as gensim and paddlehub. gensim is used here.
```python
import gensim
from gensim.models.word2vec import Word2Vec

# take only the token strings, ignoring the labels (already converted to ID sequences)
num_features = 100  # word vector dimensionality
num_workers = 8     # number of threads to run in parallel

train_texts = list(map(lambda x: list(x.split()), train_texts))
# note: in gensim >= 4.0 the `size` parameter is named `vector_size`
model = Word2Vec(train_texts, workers=num_workers, size=num_features)
model.init_sims(replace=True)

# save the model (it can also be exported as txt)
model.save("./word2vec.bin")
```
The first line of the exported text file is (vocabulary size, vector dimensionality, i.e. num_features).
Each subsequent line holds a word and its corresponding vector.
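That text format can be parsed directly without gensim; a sketch over a made-up two-word file:

```python
# Parse the word2vec text format described above: the header line holds
# (vocabulary size, vector dimensionality); each later line is a word
# followed by its vector components. The content here is fabricated.

sample = "2 3\n股市 0.1 0.2 0.3\n体育 -0.1 0.0 0.4\n"

def load_w2v_text(content):
    lines = content.strip().split('\n')
    n_words, dim = map(int, lines[0].split())
    vectors = {}
    for line in lines[1:]:
        parts = line.split()
        vectors[parts[0]] = [float(v) for v in parts[1:]]
    # sanity-check against the header
    assert len(vectors) == n_words and all(len(v) == dim for v in vectors.values())
    return vectors

vecs = load_w2v_text(sample)
print(vecs['股市'])  # [0.1, 0.2, 0.3]
```

In practice one would read the real exported file (e.g. `open(path, encoding='utf-8').read()`) instead of the inline `sample` string.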
```python
import numpy as np
import paddle
from paddle.nn import LSTM, Embedding, Dropout, Linear
import paddle.nn.functional as F

# Classification network. The name SentimentClassifier follows the original
# tutorial; here it performs 14-way news-topic classification.
class SentimentClassifier(paddle.nn.Layer):
    def __init__(self, hidden_size, vocab_size, class_num=14, num_steps=128, num_layers=1, init_scale=0.1, dropout=None):

        # Parameters:
        # 1. hidden_size: the embedding size and the dimensionality of the hidden and cell vectors
        # 2. vocab_size: the vocabulary size the model can handle
        # 3. class_num: number of classes (binary or multi-class)
        # 4. num_steps: the maximum sentence length the model can consider
        # 5. num_layers: number of network layers
        # 6. init_scale: initialization range for the network parameters.
        #    LSTMs use many Tanh and Sigmoid activations internally, which are
        #    sensitive to numeric precision, so a small initialization range
        #    is generally used to keep training stable.

        super(SentimentClassifier, self).__init__()
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        self.class_num = class_num
        self.init_scale = init_scale
        self.num_layers = num_layers
        self.num_steps = num_steps
        self.dropout = dropout

        # LSTM that condenses each sentence into a vector
        self.simple_lstm_rnn = LSTM(input_size=hidden_size, hidden_size=hidden_size, num_layers=num_layers)

        # Embedding layer that maps each word of a sentence to a vector
        self.embedding = Embedding(num_embeddings=vocab_size, embedding_dim=hidden_size, sparse=False,
            weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Uniform(low=-init_scale, high=init_scale)))

        # Once a sentence vector is obtained, it is classified by multiplying
        # it with a weight matrix of shape [self.hidden_size, self.class_num]
        # and adding a bias of shape [self.class_num], mapping the sentence
        # vector to the class scores.
        self.cls_fc = Linear(in_features=self.hidden_size, out_features=self.class_num,
                             weight_attr=None, bias_attr=None)
        self.dropout_layer = Dropout(p=self.dropout, mode='upscale_in_train')

    def forward(self, input, label):

        # Define the initial hidden and cell states of the LSTM, zero-initialized.
        # batch_size and embedding_size are globals set elsewhere in the tutorial.
        init_hidden_data = np.zeros(
            (self.num_layers, batch_size, embedding_size), dtype='float32')
        init_cell_data = np.zeros(
            (self.num_layers, batch_size, embedding_size), dtype='float32')

        # Convert the initial states to tensors. stop_gradient=True keeps them
        # from being updated, which would otherwise hurt training.
        init_hidden = paddle.to_tensor(init_hidden_data)
        init_hidden.stop_gradient = True
        init_cell = paddle.to_tensor(init_cell_data)
        init_cell.stop_gradient = True

        init_h = paddle.reshape(
            init_hidden, shape=[self.num_layers, -1, self.hidden_size])
        init_c = paddle.reshape(
            init_cell, shape=[self.num_layers, -1, self.hidden_size])

        # Convert the mini-batch of input sentences to word vectors
        x_emb = self.embedding(input)
        x_emb = paddle.reshape(
            x_emb, shape=[-1, self.num_steps, self.hidden_size])
        if self.dropout is not None and self.dropout > 0.0:
            x_emb = self.dropout_layer(x_emb)

        # Run the LSTM to turn each sentence into a vector
        rnn_out, (last_hidden, last_cell) = self.simple_lstm_rnn(x_emb, (init_h, init_c))
        last_hidden = paddle.reshape(
            last_hidden[-1], shape=[-1, self.hidden_size])

        # Map each sentence vector to the class scores
        projection = self.cls_fc(last_hidden)
        pred = F.softmax(projection, axis=-1)

        # Compute the loss against the given labels, using the cross-entropy
        # loss standard for classification tasks
        loss = F.softmax_with_cross_entropy(
            logits=projection, label=label, soft_label=False)
        loss = paddle.mean(loss)

        # Return the prediction pred and the network loss
        return pred, loss
```
(5) Training and prediction
```python
import random

# Iterator that yields a new batch on each call, for training or prediction
def build_batch(word2id_dict, corpus, batch_size, epoch_num, max_seq_len, shuffle=True, drop_last=True):

    # The model receives two inputs:
    # 1. sentence_batch, a tensor of shape [batch_size, max_seq_len]: one mini-batch of sentences.
    # 2. sentence_label_batch, a tensor of shape [batch_size, 1]: each element is the class id of a sentence.
    sentence_batch = []
    sentence_label_batch = []

    for _ in range(epoch_num):

        # Shuffling before each epoch helps training,
        # but do not shuffle when predicting.
        if shuffle:
            random.shuffle(corpus)

        for sentence, sentence_label in corpus:
            # truncate to max_seq_len, then pad short sentences with [pad]
            sentence_sample = sentence[:min(max_seq_len, len(sentence))]
            if len(sentence_sample) < max_seq_len:
                for _ in range(max_seq_len - len(sentence_sample)):
                    sentence_sample.append(word2id_dict['[pad]'])

            sentence_sample = [[word_id] for word_id in sentence_sample]

            sentence_batch.append(sentence_sample)
            sentence_label_batch.append([sentence_label])

            if len(sentence_batch) == batch_size:
                yield np.array(sentence_batch).astype("int64"), np.array(sentence_label_batch).astype("int64")
                sentence_batch = []
                sentence_label_batch = []
    if not drop_last and len(sentence_batch) > 0:
        yield np.array(sentence_batch).astype("int64"), np.array(sentence_label_batch).astype("int64")
```
-
```python
# training and prediction
def train():
    step = 0
    sentiment_classifier = SentimentClassifier(
        embedding_size, vocab_size, num_steps=max_seq_len, num_layers=1)
    # optimizer that updates the network parameters
    optimizer = paddle.optimizer.Adam(learning_rate=0.01, beta1=0.9, beta2=0.999, parameters=sentiment_classifier.parameters())

    sentiment_classifier.train()
    for sentences, labels in build_batch(
        word2id_dict, train_corpus, batch_size, epoch_num, max_seq_len):

        sentences_var = paddle.to_tensor(sentences)
        labels_var = paddle.to_tensor(labels)
        pred, loss = sentiment_classifier(sentences_var, labels_var)

        loss.backward()         # back-propagate
        optimizer.step()        # minimize the loss
        optimizer.clear_grad()  # clear the gradients

        step += 1
        if step % 100 == 0:
            print("step %d, loss %.3f" % (step, loss.numpy()[0]))

    # After training, evaluate the trained network.
    # eval() switches the network to evaluation mode, in which no gradient
    # updates are performed.
    eval(sentiment_classifier)

def eval(sentiment_classifier):
    sentiment_classifier.eval()
    # Track the accuracy of the model's predictions. The original tutorial
    # computed binary accuracy from tp/tn/fp/fn; for this 14-class task we
    # instead count how often the argmax of the predicted distribution
    # matches the label.
    correct = 0.
    total = 0.
    for sentences, labels in build_batch(
        word2id_dict, test_corpus, batch_size, 1, max_seq_len, shuffle=False):

        sentences_var = paddle.to_tensor(sentences)
        labels_var = paddle.to_tensor(labels)

        # run the model on the current batch
        pred, loss = sentiment_classifier(sentences_var, labels_var)

        # convert the output to a numpy array and compare predictions
        # against the labels
        pred = pred.numpy()
        for i in range(len(pred)):
            if np.argmax(pred[i]) == labels[i][0]:
                correct += 1
            total += 1

    # report the final evaluation result
    print("the acc in the test set is %.3f" % (correct / total))
```
(On the full dataset, including the news body text) prediction accuracy is 0.682; next steps are to improve the feature vectors or try other models!
References:
https://blog.csdn.net/qq_42067550/article/details/106101183
https://paddleinference.paddlepaddle.org.cn/product_introduction/inference_intro.html
https://blog.csdn.net/xiaoxiaojie521/article/details/97240436