
[Kaggle] Spam/Ham Email Classification (spaCy)


Competition link: https://www.kaggle.com/c/ds100fa19
Related posts:
[Kaggle] Spam/Ham Email Classification (RNN/GRU/LSTM)
[Kaggle] Spam/Ham Email Classification (BERT)

1. Import Packages

import pandas as pd
import spacy
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

2. Data Preview

train.head(10)
train = train.fillna(" ")
test = test.fillna(" ")

Note: fill in the NaN values here, otherwise later steps will raise an error. See:
spaCy error `gold.pyx in spacy.gold.GoldParse.__init__()` and its fix: https://michael.blog.csdn.net/article/details/109106806
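
As a quick sanity check before handing the text to spaCy, you can verify that no NaN values remain (a minimal sketch, assuming the `subject` and `email` columns used below):

# confirm no missing values remain in the columns fed to spaCy
assert train[['subject', 'email']].isna().sum().sum() == 0
assert test[['subject', 'email']].isna().sum().sum() == 0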

3. Feature Combination

  • Combine each email's subject and body, and build the labels
train['all'] = train['subject'] + train['email']   # concatenate subject and body
# one boolean per category, in the format spaCy's text categorizer expects
train['label'] = [{"spam": bool(y), "ham": not bool(y)}
                  for y in train.spam.values]
train.head(10)

I'm not entirely sure why the labels need to look like this; presumably spaCy's text categorizer requires labels as a dict with one boolean per category.

  • Split into training and validation sets using stratified sampling
from sklearn.model_selection import StratifiedShuffleSplit
# help(StratifiedShuffleSplit)
splt = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=1)
for train_idx, valid_idx in splt.split(train, train['spam']):
    # stratified by the second argument (the 'spam' column)
    train_set = train.iloc[train_idx]
    valid_set = train.iloc[valid_idx]

# check the label distributions
print(train_set['spam'].value_counts()/len(train_set))
print(valid_set['spam'].value_counts()/len(valid_set))

Output: the label distributions of the two sets are nearly identical

0    0.743636
1    0.256364
Name: spam, dtype: float64
0    0.743713
1    0.256287
Name: spam, dtype: float64
  • Separate texts and labels
train_text = train_set['all'].values
train_label = train_set['label']
valid_text = valid_set['all'].values
valid_label = valid_set['label']

# the labels also need to be wrapped under a 'cats' key ('cats' is a built-in spaCy keyword)
train_label = [{"cats": label} for label in train_label]
valid_label = [{"cats": label} for label in valid_label]

# zip texts and annotations into (text, annotation) pairs, then convert to a list
train_data = list(zip(train_text, train_label))

test_text = (test['subject']+test['email']).values
print(train_label[0])

Output:

{'cats': {'spam': False, 'ham': True}}

4. Modeling

  • Create the model and pipeline
nlp = spacy.blank('en')  # create a blank English model
email_cat = nlp.create_pipe('textcat',
#                             config=
#                             {
#     "exclusive_classes": True,  # mutually exclusive classes (binary here)
#     "architecture": "bow"
#                             }
                           )
# the argument 'textcat' is not arbitrary; it is a built-in pipe name
# the config above can also be omitted; I could not find documentation on how to configure it (see the sketch below)
help(nlp.create_pipe)
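
For what it's worth, spaCy 2.x's TextCategorizer does accept a config dict; a minimal sketch of the commented-out variant above, where `exclusive_classes` and the "bow" architecture come from the spaCy 2.x docs (not from this post's original run), and `email_cat_with_cfg` is just an illustrative name:

# same as the call above, but with an explicit config (spaCy 2.x)
email_cat_with_cfg = nlp.create_pipe(
    'textcat',
    config={
        "exclusive_classes": True,  # each email is exactly one of ham / spam
        "architecture": "bow",      # bag-of-words; "simple_cnn" and "ensemble" also exist
    })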
  • Add the pipe
nlp.add_pipe(email_cat)
  • Add the labels
# mind the order: ham is 0, spam is 1
email_cat.add_label('ham')
email_cat.add_label('spam')
  • Training
from spacy.util import minibatch
import random

def train_model(model, train_data, optimizer, batch_size=8):
    loss = {}
    random.seed(1)
    random.shuffle(train_data)  # shuffle the (text, annotation) pairs
    batches = minibatch(train_data, size=batch_size)  # split into mini-batches
    for batch in batches:
        text, label = zip(*batch)
        model.update(text, label, sgd=optimizer, losses=loss)
    return loss
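
The spaCy 2.x examples often grow the batch size over training instead of fixing it at 8; a hedged variant of the loop above using spacy.util.compounding (the 4.0 → 32.0 schedule is the one the docs commonly show, not something tuned for this dataset, and train_model_compounding is just an illustrative name):

from spacy.util import minibatch, compounding
import random

def train_model_compounding(model, train_data, optimizer):
    loss = {}
    random.shuffle(train_data)
    # batch size grows from 4 towards 32 as batches are drawn
    batches = minibatch(train_data, size=compounding(4.0, 32.0, 1.001))
    for batch in batches:
        text, label = zip(*batch)
        model.update(text, label, sgd=optimizer, losses=loss)
    return loss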
  • Prediction
def predict(model, text):
    docs = [model.tokenizer(txt) for txt in text]  # tokenize the texts first
    emailpred = model.get_pipe('textcat')
    score, _ = emailpred.predict(docs)
    pred_label = score.argmax(axis=1)  # index of the highest-scoring category
    return pred_label
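
predict returns class indices (0 for ham, 1 for spam, following the order the labels were added). To get label names instead, the pipe's labels attribute gives the mapping; a minimal sketch (the sample text is made up, and the call only works after the model has been initialized and trained below):

# run this after training
label_names = nlp.get_pipe('textcat').labels  # ('ham', 'spam')
pred_idx = predict(nlp, ["win a free prize now, click here!"])
print([label_names[i] for i in pred_idx])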
  • Evaluation
def evaluate(model, text, label):
    pred = predict(model, text)
    true_class = [int(lab['cats']['spam']) for lab in label]  # spam -> 1, ham -> 0
    correct = (pred == true_class)
    acc = sum(correct)/len(correct)  # accuracy
    return acc

5. Training

n = 20
opt = nlp.begin_training()  # create the optimizer
for i in range(n):
    loss = train_model(nlp, train_data, opt)
    acc = evaluate(nlp, valid_text, valid_label)
    print(f"Loss: {loss['textcat']:.3f} \t Accuracy: {acc:.3f}")

Output:

Loss: 1.132 	 Accuracy: 0.941
Loss: 0.283 	 Accuracy: 0.988
Loss: 0.121 	 Accuracy: 0.993
Loss: 0.137 	 Accuracy: 0.993
Loss: 0.094 	 Accuracy: 0.982
Loss: 0.069 	 Accuracy: 0.995
Loss: 0.060 	 Accuracy: 0.990
Loss: 0.010 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.992
Loss: 0.004 	 Accuracy: 0.991
Loss: 0.004 	 Accuracy: 0.991
Loss: 0.308 	 Accuracy: 0.981
Loss: 0.158 	 Accuracy: 0.987
Loss: 0.014 	 Accuracy: 0.990
Loss: 0.007 	 Accuracy: 0.990
Loss: 0.043 	 Accuracy: 0.990

6. Prediction

pred = predict(nlp, test_text)

  • Write the submission file
id = test['id']
output = pd.DataFrame({'id':id, 'Class':pred})
output.to_csv("submission.csv",  index=False)

The model reaches over 99% accuracy on the test set!


My CSDN blog: https://michael.blog.csdn.net/

Follow my WeChat official account (Michael阿明); let's keep learning and improving together!
