import string

trainDF['char_count'] = trainDF['text'].apply(len)
trainDF['word_count'] = trainDF['text'].apply(lambda x: len(x.split()))
trainDF['word_density'] = trainDF['char_count'] / (trainDF['word_count'] + 1)
trainDF['punctuation_count'] = trainDF['text'].apply(lambda x: len("".join(c for c in x if c in string.punctuation)))
trainDF['title_word_count'] = trainDF['text'].apply(lambda x: len([wrd for wrd in x.split() if wrd.istitle()]))
trainDF['upper_case_word_count'] = trainDF['text'].apply(lambda x: len([wrd for wrd in x.split() if wrd.isupper()]))
pos_family = {
    'noun': ['NN', 'NNS', 'NNP', 'NNPS'],
    'pron': ['PRP', 'PRP$', 'WP', 'WP$'],
    'verb': ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ'],
    'adj':  ['JJ', 'JJR', 'JJS'],
    'adv':  ['RB', 'RBR', 'RBS', 'WRB']
}
import textblob

# Count how many words in a given sentence carry a part-of-speech tag
# from the requested tag family
def check_pos_tag(x, flag):
    cnt = 0
    try:
        wiki = textblob.TextBlob(x)
        for tup in wiki.tags:
            ppo = list(tup)[1]
            if ppo in pos_family[flag]:
                cnt += 1
    except Exception:
        pass
    return cnt
trainDF['noun_count'] = trainDF['text'].apply(lambda x: check_pos_tag(x, 'noun'))
trainDF['verb_count'] = trainDF['text'].apply(lambda x: check_pos_tag(x, 'verb'))
trainDF['adj_count'] = trainDF['text'].apply(lambda x: check_pos_tag(x, 'adj'))
trainDF['adv_count'] = trainDF['text'].apply(lambda x: check_pos_tag(x, 'adv'))
trainDF['pron_count'] = trainDF['text'].apply(lambda x: check_pos_tag(x, 'pron'))
2.5 Topic Models as Features
Topic modeling is a technique for identifying the groups of words (topics) that best capture the information in a collection of documents; here I used LDA to generate topic-model features. LDA is an iterative model that starts from a fixed number of topics: each topic is represented as a distribution over words, and each document as a distribution over topics. Although individual tokens carry no meaning on their own, the word probability distributions expressed by the topics convey the ideas of the documents. To learn more about topic modeling, see:
> https://www.analyticsvidhya.com/blog/2016/08/beginners-guide-to-topi
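As a minimal sketch of the idea, the document-topic distribution that LDA learns can itself be used as a feature matrix. The snippet below uses scikit-learn's `LatentDirichletAllocation`; the toy corpus and the choice of two topics are illustrative assumptions, not part of the original pipeline.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus (illustrative): two rough themes, pets and the stock market
corpus = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock prices rose on the market today",
    "investors traded shares on the stock market",
]

# LDA works on raw term counts, so vectorize with CountVectorizer
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

# Fit LDA with a fixed number of topics; each row of the result is a
# document's distribution over topics and can serve as its feature vector
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(X)  # shape: (n_docs, n_topics)
print(topic_features.shape)
```

In the article's setting, the same `fit_transform` would be applied to the count-vectorized `trainDF['text']`, and the resulting per-document topic proportions appended as additional columns of features.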