Competition page: https://www.kaggle.com/c/ds100fa19
Related posts:
[Kaggle] Spam/Ham Email Classification (RNN/GRU/LSTM)
[Kaggle] Spam/Ham Email Classification (BERT)
import pandas as pd
import spacy
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
train.head(10)
train = train.fillna(" ")  # replace NaN with a space so every field is a string
test = test.fillna(" ")
Note: fill in the NaN values first, otherwise spaCy raises an error later (gold.pyx in spacy.gold.GoldParse.__init__()); see the workaround at https://michael.blog.csdn.net/article/details/109106806
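A quick sanity check (a minimal sketch) to confirm no NaN survived the fill:
# every column should report zero missing values after fillna
assert train.isna().sum().sum() == 0
assert test.isna().sum().sum() == 0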
train['all'] = train['subject'] + train['email']  # combine subject and body into one text field
train['label'] = [{"spam": bool(y), "ham": not bool(y)}
                  for y in train.spam.values]
train.head(10)
The label format may look odd at first: spaCy's textcat expects, for every example, a dict that maps each category name to a boolean, so both 'spam' and 'ham' must be present.
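Printing one row makes the format concrete (the values shown are illustrative):
print(train['label'].iloc[0])
# e.g. {'spam': False, 'ham': True} for a ham email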
from sklearn.model_selection import StratifiedShuffleSplit
# help(StratifiedShuffleSplit)
splt = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=1)
for train_idx, valid_idx in splt.split(train, train['spam']):
    # stratify on the 'spam' column so both splits keep the class ratio
    train_set = train.iloc[train_idx]
    valid_set = train.iloc[valid_idx]
# check the label distribution of both splits
print(train_set['spam'].value_counts()/len(train_set))
print(valid_set['spam'].value_counts()/len(valid_set))
Output: the label distributions of the two splits are nearly identical:
0 0.743636
1 0.256364
Name: spam, dtype: float64
0 0.743713
1 0.256287
Name: spam, dtype: float64
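For a single split like this, sklearn's train_test_split with stratify= gives the same result more directly (a sketch of an alternative, not what this post uses):
from sklearn.model_selection import train_test_split

# equivalent single stratified split; stratify= keeps the spam ratio
train_set, valid_set = train_test_split(
    train, test_size=0.2, random_state=1, stratify=train['spam'])
Either way, train_set and valid_set come out the same, so the rest of the code is unchanged.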
train_text = train_set['all'].values
train_label = train_set['label']
valid_text = valid_set['all'].values
valid_label = valid_set['label']
# wrap each label dict under a 'cats' key; 'cats' is the key spaCy's textcat expects
train_label = [{"cats": label} for label in train_label]
valid_label = [{"cats": label} for label in valid_label]
# zip texts and labels into (text, annotation) pairs, then convert to a list
train_data = list(zip(train_text, train_label))
test_text = (test['subject']+test['email']).values
print(train_label[0])
Output:
{'cats': {'spam': False, 'ham': True}}
nlp = spacy.blank('en')  # create a blank English pipeline
email_cat = nlp.create_pipe('textcat',
                            # config=
                            # {
                            #     "exclusive_classes": True,  # mutually exclusive classes (binary)
                            #     "architecture": "bow"
                            # }
                            )
# 'textcat' is a built-in component name, not an arbitrary string
# the config above is optional; I did not find documentation on the available options
help(nlp.create_pipe)
nlp.add_pipe(email_cat)
# note the order: 'ham' is class 0, 'spam' is class 1
email_cat.add_label('ham')
email_cat.add_label('spam')
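This is the spaCy 2.x API (create_pipe, then add_pipe(component)). On spaCy 3.x those calls fail; a rough 3.x equivalent (a sketch, assuming spaCy >= 3.0) would be:
import spacy

nlp = spacy.blank('en')
# in spaCy 3.x, add_pipe takes the component name and returns the component
email_cat = nlp.add_pipe('textcat')
email_cat.add_label('ham')
email_cat.add_label('spam')
The training loop also changes in 3.x: nlp.update expects Example objects built with spacy.training.Example.from_dict instead of (text, annotation) pairs, so the rest of this post assumes 2.x.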
from spacy.util import minibatch
import random

def train(model, data, optimizer, batch_size=8):
    loss = {}
    random.seed(1)
    random.shuffle(data)  # shuffle the training data in place
    batches = minibatch(data, size=batch_size)  # split into minibatches
    for batch in batches:
        text, label = zip(*batch)
        model.update(text, label, sgd=optimizer, losses=loss)
    return loss
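As a side note, spaCy's own examples usually grow the batch size during training; a hedged variant of the batching line above uses util.compounding:
from spacy.util import minibatch, compounding

# batch size grows from 4 to 32, multiplying by 1.001 each step
batches = minibatch(train_data, size=compounding(4.0, 32.0, 1.001))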
def predict(model, text):
    docs = [model.tokenizer(txt) for txt in text]  # tokenize the texts first
    emailpred = model.get_pipe('textcat')
    score, _ = emailpred.predict(docs)  # scores have shape (n_docs, n_labels)
    pred_label = score.argmax(axis=1)   # 0 = ham, 1 = spam
    return pred_label
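If you want the raw class scores instead of hard 0/1 labels, the same scores matrix can be read directly; a minimal sketch mirroring the predict helper above (predict_proba is a hypothetical helper name):
def predict_proba(model, texts):
    docs = [model.tokenizer(t) for t in texts]
    scores, _ = model.get_pipe('textcat').predict(docs)
    return scores[:, 1]  # column 1 is 'spam', since it was added second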
def evaluate(model, text, label):
    pred = predict(model, text)
    true_class = [int(lab['cats']['spam']) for lab in label]
    correct = (pred == true_class)
    acc = sum(correct)/len(correct)  # accuracy
    return acc
n = 20  # number of passes over the training data
opt = nlp.begin_training()  # create the optimizer
for i in range(n):
    loss = train(nlp, train_data, opt)
    acc = evaluate(nlp, valid_text, valid_label)
    print(f"Loss: {loss['textcat']:.3f} \t Accuracy: {acc:.3f}")
Output:
Loss: 1.132   Accuracy: 0.941
Loss: 0.283   Accuracy: 0.988
Loss: 0.121   Accuracy: 0.993
Loss: 0.137   Accuracy: 0.993
Loss: 0.094   Accuracy: 0.982
Loss: 0.069   Accuracy: 0.995
Loss: 0.060   Accuracy: 0.990
Loss: 0.010   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.992
Loss: 0.004   Accuracy: 0.991
Loss: 0.004   Accuracy: 0.991
Loss: 0.308   Accuracy: 0.981
Loss: 0.158   Accuracy: 0.987
Loss: 0.014   Accuracy: 0.990
Loss: 0.007   Accuracy: 0.990
Loss: 0.043   Accuracy: 0.990
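Since roughly 74% of the emails are ham, accuracy alone can be flattering; a hedged extra check with sklearn's classification_report gives per-class precision and recall on the validation set:
from sklearn.metrics import classification_report

val_pred = predict(nlp, valid_text)
val_true = [int(lab['cats']['spam']) for lab in valid_label]
print(classification_report(val_true, val_pred, target_names=['ham', 'spam']))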
pred = predict(nlp, test_text)
output = pd.DataFrame({'id': test['id'], 'Class': pred})
output.to_csv("submission.csv", index=False)  # write the Kaggle submission file
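To reuse the trained pipeline later without retraining, it can be saved and reloaded (a minimal sketch; the directory name is arbitrary):
nlp.to_disk("spam_model")            # serialize the whole pipeline
reloaded = spacy.load("spam_model")  # load it back later
print(predict(reloaded, ["Congratulations, you won a free prize!"]))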
The model scores above 99% accuracy on the test set!