
09 - JD Customer Purchase Intention Prediction (Machine Learning Ensemble Algorithms) (Project 9)

JD Behavioral Purchase Prediction Analysis

Project Key Points

  • Based on users' on-site actions (browse, add to cart, remove from cart, buy, favorite, click), predict whether a user will buy a product.
  • Modeling is done mainly with XGBoost.
  • pd.get_dummies is equivalent to one-hot encoding; it is commonly used to turn discrete categorical values into one-hot form.
  • sklearn.preprocessing.LabelEncoder(): normalizes labels, mapping each label to an integer in range(n_classes), i.e. 0 to n_classes - 1:

    from sklearn import preprocessing
    le = preprocessing.LabelEncoder()
    le.fit(["paris", "paris", "tokyo", "amsterdam"])
    print('classes: %s' % le.classes_)                                       # ['amsterdam' 'paris' 'tokyo']
    print('encoded labels: %s' % le.transform(["tokyo", "tokyo", "paris"]))  # [2 2 1]
    print('decoded labels: %s' % le.inverse_transform([2, 2, 1]))            # ['tokyo' 'tokyo' 'paris']

  • gc.collect() forces a garbage collection across all generations.
  • data.groupby(): grouped aggregation.
  • Merging data: pd.merge()
  • Counting actions: Counter(behavior)

    from collections import Counter
    # Helper: compute statistics for each user's grouped data
    behavior_type = group.type.astype(int)  # the user's behavior types
    type_cnt = Counter(behavior_type)       # 1 browse, 2 add to cart, 3 remove from cart, 4 buy, 5 favorite, 6 click
    group['browse_num'] = type_cnt[1]

  • Reading data in chunks: chunk = reader.get_chunk(chunk_size)[['user_id', 'type']]
  • Concatenating data: df_ac = pd.concat(chunks, ignore_index=True)
  • Dropping duplicates: df_ac = df_ac.drop_duplicates('user_id')
  • Merging on a key column: pd.merge(user_base, user_behavior, on=['user_id'], how='left')
  • Selecting specific columns:

    df_usr = pd.read_csv(USER_FILE, header=0)
    df_usr = df_usr[['user_id', 'age', 'sex', 'user_lv_cd']]

  • data.info(): basic information (row count, column count, column index, dtypes, non-null counts, memory usage).

  • data.describe(): summary statistics (count, mean, std, min, max, quantiles).

  • Concatenating a list of DataFrames: df_ac = pd.concat(chunks, ignore_index=True)

  • data.groupby(['key']): group a DataFrame (by one key or by several).

  • XGBoost modeling:

    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_val, label=y_val)
    param = {'n_estimators': 4000, 'max_depth': 3, 'min_child_weight': 5, 'gamma': 0.1,
             'subsample': 0.9, 'colsample_bytree': 0.8, 'scale_pos_weight': 10, 'eta': 0.1,
             'objective': 'binary:logistic', 'eval_metric': ['auc', 'error']}
    num_round = param['n_estimators']
    evallist = [(dtrain, 'train'), (dvalid, 'eval')]
    bst = xgb.train(param, dtrain, num_round, evallist, early_stopping_rounds=10)
    bst.save_model('bst.model')


Project Overview

Using known purchase and browsing data, predict users' future purchase intent. Knowing intent in advance greatly improves the platform's control over logistics, allowing stock to be prepared ahead of time. Consumers benefit as well: purchase-intent prediction effectively lets products find consumers, enabling personalized service and a much better shopping experience.

The project consists of the following steps:

  • Data loading
  • Data exploration
  • Feature engineering
  • Algorithm selection
  • Model evaluation

1  Data Cleaning

1.1  Loading Data (small-batch sanity check)

    import pandas as pd
    df_user = pd.read_csv('./data/JData_User.csv')
    display(df_user.head())
    df_month3 = pd.read_csv('./data/JData_Action_201603.csv')
    display(df_month3.head())

  

1.2  Garbage Collection

    import gc  # gc.collect() forces a collection across all generations
    del df_user
    del df_month3
    gc.collect()

1.3  Data Checks

A quick test of merging with pd.merge():

    import pandas as pd
    df1 = pd.DataFrame({'sku': ['a', 'b', 'c', 'd'], 'data': [1, 1, 2, 3]})
    df2 = pd.DataFrame({'sku': ['a', 'b', 'f'], 'time': ['+', '-', '*']})
    df3 = pd.DataFrame({'sku': ['a', 'b', 'd']})
    df4 = pd.DataFrame({'sku': ['a', 'b', 'c', 'd']})
    display(pd.merge(df1, df2))
    display(pd.merge(df1, df3))
    display(pd.merge(df1, df4))

        

  • Test result: rows with the same sku are joined into one (pd.merge defaults to an inner join on the shared column).

    def user_action_id_check():
        df_user = pd.read_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User.csv')
        df_user = df_user.loc[:, 'user_id'].to_frame()
        df_month2 = pd.read_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv')
        print('Is action of Feb. from User file?', len(df_month2) == len(pd.merge(df_user, df_month2)))
        df_month3 = pd.read_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603.csv')
        print('Is action of Mar. from User file?', len(df_month3) == len(pd.merge(df_user, df_month3)))
        df_month4 = pd.read_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604.csv')
        print('Is action of Apr. from User file?', len(df_month4) == len(pd.merge(df_user, df_month4)))
        del df_user, df_month2, df_month3, df_month4
        gc.collect()

    user_action_id_check()
    '''Is action of Feb. from User file? True
    Is action of Mar. from User file? True
    Is action of Apr. from User file? True'''
  • The lengths match: every user_id appearing in the monthly action logs is contained in the user table, so there are no unknown users.

1.4  Checking Each File for Duplicates

    def deduplicate(filepath, filename, newpath):
        df_file = pd.read_csv(filepath)
        before = df_file.shape[0]
        df_file.drop_duplicates(inplace=True)
        after = df_file.shape[0]
        n_dup = before - after
        if n_dup != 0:
            print('No. of duplicate records for ' + filename + ' is: ' + str(n_dup))
            df_file.to_csv(newpath, index=None)
        else:
            print('No duplicate records in ' + filename)
        del df_file
        gc.collect()

    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv', 'Feb. action',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602_dedup.csv')
    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603.csv', 'Mar. action',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603_dedup.csv')
    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604.csv', 'Apr. action',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604_dedup.csv')
    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Comment.csv', 'Comment',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Comment_dedup.csv')
    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Product.csv', 'Product',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Product_dedup.csv')
    deduplicate('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User.csv', 'User',
                'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User_dedup.csv')

        

  • The February, March, and April action files contain duplicate records.

1.5  Duplicate-Data Analysis

    df_month2 = pd.read_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv')
    IsDuplicated = df_month2.duplicated()
    df_d = df_month2[IsDuplicated]
    # Most duplicates turn out to come from browsing (type 1) or clicking (type 6)
    display(df_d.groupby('type').count())
    del df_month2, df_d
    gc.collect()

        

  • Most duplicate records are produced by browsing (type 1) or clicking (type 6).

1.6  Building the user_table

    # Define file paths
    ACTION_201602_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv'
    ACTION_201603_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603.csv'
    ACTION_201604_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604.csv'
    COMMENT_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Comment.csv'
    PRODUCT_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Product.csv'
    USER_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User.csv'
    USER_TABLE_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/User_table.csv'

1.6.1  A Function to Count Each User's Action Frequencies

    # Imports
    import pandas as pd
    import numpy as np
    from collections import Counter

    # Helper: compute per-user statistics for each grouped chunk
    def add_type_count(group):
        behavior_type = group.type.astype(int)
        # Behavior types:
        # 1 browse, 2 add to cart, 3 remove from cart,
        # 4 buy, 5 favorite, 6 click
        type_cnt = Counter(behavior_type)
        group['browse_num'] = type_cnt[1]
        group['addcart_num'] = type_cnt[2]
        group['delcart_num'] = type_cnt[3]
        group['buy_num'] = type_cnt[4]
        group['favor_num'] = type_cnt[5]
        group['click_num'] = type_cnt[6]
        return group[['user_id', 'browse_num', 'addcart_num',
                      'delcart_num', 'buy_num', 'favor_num',
                      'click_num']]

1.6.2  Reading the Behavior Data in Chunks

    # Aggregate the action data
    # Tune chunk_size to your machine
    def get_from_action_data(fname, chunk_size=50000):
        reader = pd.read_csv(fname, header=0, iterator=True)
        chunks = []
        loop = True
        while loop:
            try:
                # Read only the user_id and type columns
                chunk = reader.get_chunk(chunk_size)[['user_id', 'type']]
                chunks.append(chunk)
            except StopIteration:
                loop = False
                print('Iteration is stopped')
        # Concatenate the chunks into one DataFrame
        df_ac = pd.concat(chunks, ignore_index=True)
        # Group by user_id and compute per-user statistics
        df_ac = df_ac.groupby(['user_id'], as_index=False).apply(add_type_count)
        # Drop duplicate rows
        df_ac = df_ac.drop_duplicates('user_id')
        return df_ac
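As an aside, the same chunked read can be expressed with pandas' chunksize parameter, which turns read_csv itself into an iterator. A minimal sketch of the equivalent loop (the helper name is ours, not from the original notebook):

    import pandas as pd

    def read_user_type_in_chunks(fname, chunk_size=50000):
        # read_csv with chunksize yields DataFrames of at most chunk_size rows
        chunks = [chunk[['user_id', 'type']]
                  for chunk in pd.read_csv(fname, chunksize=chunk_size)]
        return pd.concat(chunks, ignore_index=True)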

1.6.3  Inspecting February's User Data

    df_ac = get_from_action_data(fname=ACTION_201602_FILE,
                                 chunk_size=50000)
    display(df_ac.head(10))
    del df_ac
    gc.collect()

         

1.6.4  A Function to Aggregate All Months

    # Aggregate the statistics from every action file
    def merge_action_data():
        df_ac = []
        df_ac.append(get_from_action_data(fname=ACTION_201602_FILE))
        df_ac.append(get_from_action_data(fname=ACTION_201603_FILE))
        df_ac.append(get_from_action_data(fname=ACTION_201604_FILE))
        df_ac = pd.concat(df_ac, ignore_index=True)
        # Sum each user's statistics across the action tables
        df_ac = df_ac.groupby(['user_id'], as_index=False).sum()
        # Build conversion-rate columns
        df_ac['buy_addcart_ratio'] = df_ac['buy_num'] / df_ac['addcart_num']
        df_ac['buy_browse_ratio'] = df_ac['buy_num'] / df_ac['browse_num']
        df_ac['buy_click_ratio'] = df_ac['buy_num'] / df_ac['click_num']
        df_ac['buy_favor_ratio'] = df_ac['buy_num'] / df_ac['favor_num']
        # Cap conversion rates above 1 at 1 (100%)
        df_ac.loc[df_ac['buy_addcart_ratio'] > 1., 'buy_addcart_ratio'] = 1
        df_ac.loc[df_ac['buy_browse_ratio'] > 1., 'buy_browse_ratio'] = 1
        df_ac.loc[df_ac['buy_click_ratio'] > 1., 'buy_click_ratio'] = 1
        df_ac.loc[df_ac['buy_favor_ratio'] > 1., 'buy_favor_ratio'] = 1
        return df_ac

1.6.5  Aggregating All the Data

    user_behavior = merge_action_data()
    user_behavior.head()

        

Extract the needed fields from the JData_User table:

    def get_from_jdata_user():
        df_usr = pd.read_csv(USER_FILE, header=0)
        df_usr = df_usr[['user_id', 'age', 'sex', 'user_lv_cd']]
        return df_usr

    user_base = get_from_jdata_user()
    user_base.head()

        

1.6.6  Saving the Merged Data

    # Join into one table, like a SQL left join
    user_table = pd.merge(user_base, user_behavior, on=['user_id'], how='left')
    # Save as user_table.csv
    user_table.to_csv(USER_TABLE_FILE, index=False)
    display(user_table.head(10))
    del user_table, user_behavior, user_base
    gc.collect()

        

  • This merges user attributes with their on-site behavior statistics.

1.7  Data Cleaning

1.7.1  An Overall Look at the Users

    import pandas as pd
    df_user = pd.read_csv(USER_TABLE_FILE, header=0)
    # Display format: keep three decimal places
    pd.options.display.float_format = '{:,.3f}'.format
    df_user.shape  # (105321, 14)
    df_user.describe()

        

1.7.2  Dropping Users Without age/sex Values

df_user[df_user['age'].isnull()]

        

    delete_list = df_user[df_user['age'].isnull()].index
    df_user.drop(delete_list, axis=0, inplace=True)
    df_user.shape  # (105318, 14)

1.7.3  Dropping Users With No Interaction Records

    cond = (df_user['browse_num'].isnull()) & (df_user['addcart_num'].isnull()) & \
           (df_user['delcart_num'].isnull()) & (df_user['buy_num'].isnull()) & \
           (df_user['favor_num'].isnull()) & (df_user['click_num'].isnull())
    df_naction = df_user[cond]
    display(df_naction.shape)  # (105177, 14)
    df_user.drop(df_naction.index, axis=0, inplace=True)
    df_user.shape  # (141, 14)

1.7.4  Counting and Dropping Users With No Purchases

    # Users with no purchase records
    df_bzero = df_user[df_user['buy_num'] == 0]
    # Number of records whose purchase count is 0
    print(len(df_bzero))
    # Drop users with no purchases
    df_user = df_user[df_user['buy_num'] != 0]
    df_user.describe()

        

  • This removes uninformative records.

1.7.5  Dropping Crawlers and Inactive Users

  • Users whose browse-to-buy or click-to-buy conversion ratio is below 0.0005 are treated as inactive (or bot) users and removed, reducing their influence on the prediction.

    bindex = df_user[df_user['buy_browse_ratio'] < 0.0005].index
    print(len(bindex))  # 90
    df_user.drop(bindex, axis=0, inplace=True)

    bindex = df_user[df_user['buy_click_ratio'] < 0.0005].index
    print(len(bindex))  # 323
    df_user.drop(bindex, axis=0, inplace=True)

1.7.6  Saving the Cleaned Data

    df_user.to_csv('G:/01-project/07-机器学习/08-京东购买意向预测/data/User_table_cleaned.csv', index=False)
    df_user.describe()

        

    del df_user
    gc.collect()  # garbage collection

2  Data Exploration

2.1  Imports and File Path Variables

    # Imports
    %matplotlib inline
    # Plotting
    import matplotlib
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    # Define file paths
    ACTION_201602_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv'
    ACTION_201603_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603.csv'
    ACTION_201604_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604.csv'
    COMMENT_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Comment.csv'
    PRODUCT_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Product.csv'
    USER_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User.csv'
    USER_TABLE_FILE = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/User_table.csv'
    USER_TABLE_CLEANED = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/User_table_cleaned.csv'

2.2  Purchases by Day of Week (Monday-Sunday)

2.2.1  A Function to Extract Purchase Records

    # Extract purchase records
    def get_from_action_data(fname, chunk_size=50000):
        reader = pd.read_csv(fname, header=0, iterator=True)
        chunks = []
        loop = True
        while loop:
            try:
                chunk = reader.get_chunk(chunk_size)[['user_id', 'sku_id', 'type', 'time']]
                chunks.append(chunk)
            except StopIteration:
                loop = False
                print('Iteration is stopped')
        df_ac = pd.concat(chunks, ignore_index=True)
        # type == 4 means purchase
        df_ac = df_ac[df_ac['type'] == 4]
        return df_ac[['user_id', 'sku_id', 'time']]

2.2.2  Extracting All Purchase Records

    df_ac = []
    df_ac.append(get_from_action_data(fname=ACTION_201602_FILE))
    df_ac.append(get_from_action_data(fname=ACTION_201603_FILE))
    df_ac.append(get_from_action_data(fname=ACTION_201604_FILE))
    df_ac = pd.concat(df_ac, ignore_index=True)
    display(df_ac.head(), df_ac.shape)

        

2.2.3  Date Conversion

    # Convert the time column to datetime
    df_ac['time'] = pd.to_datetime(df_ac['time'])
    # Map each timestamp to its weekday (Monday = 1, Sunday = 7)
    df_ac['time'] = df_ac['time'].apply(lambda x: x.weekday() + 1)
    df_ac.head()

        

2.2.4  Groupby Summaries

    # Unique buying users per weekday
    df_user = df_ac.groupby('time')['user_id'].nunique()
    df_user = df_user.to_frame().reset_index()
    df_user.columns = ['weekday', 'user_num']
    df_user

        

    # Unique items bought per weekday
    df_item = df_ac.groupby('time')['sku_id'].nunique()
    df_item = df_item.to_frame().reset_index()
    df_item.columns = ['weekday', 'item_num']
    df_item

        

    # Purchase records per weekday
    df_ui = df_ac.groupby('time', as_index=False).size()
    df_ui.columns = ['weekday', 'user_item_num']
    df_ui

        

  • Purchase statistics viewed at weekly granularity.

2.2.5  Visualizing Weekly Purchases

    # Bar width
    bar_width = 0.2
    # Transparency
    opacity = 0.4
    plt.figure(figsize=(9, 6))
    plt.bar(df_user['weekday'], df_user['user_num'], bar_width,
            alpha=opacity, color='c', label='user')
    plt.bar(df_item['weekday'] + bar_width, df_item['item_num'],
            bar_width, alpha=opacity, color='g', label='item')
    plt.bar(df_ui['weekday'] + bar_width * 2, df_ui['user_item_num'],
            bar_width, alpha=opacity, color='m', label='user_item')
    plt.xlabel('weekday')
    plt.ylabel('number')
    plt.title('A Week Purchase Table')
    plt.xticks(df_user['weekday'] + bar_width * 3 / 2., (1, 2, 3, 4, 5, 6, 7))
    plt.tight_layout()  # compact layout
    plt.legend(prop={'size': 10})
    plt.savefig('./10-周购买情况数据可视化.png', dpi=200)

        

  • Visualization of the purchase data.

2.3  Purchases by Day of Month

2.3.1  February 2016

    df_ac = get_from_action_data(fname=ACTION_201602_FILE)
    df_ac['time'] = pd.to_datetime(df_ac['time']).apply(lambda x: x.day)
    df_ac.head()

        

    # Unique buying users per day
    df_user = df_ac.groupby('time')['user_id'].nunique()
    df_user = df_user.to_frame().reset_index()
    df_user.columns = ['day', 'user_num']
    # Unique items bought per day
    df_item = df_ac.groupby('time')['sku_id'].nunique()
    df_item = df_item.to_frame().reset_index()
    df_item.columns = ['day', 'item_num']
    # Purchase records per day
    df_ui = df_ac.groupby('time', as_index=False).size()
    df_ui.columns = ['day', 'user_item_num']
    df_ui

        

    # Bar width
    bar_width = 0.2
    # Transparency
    opacity = 0.4
    # Days
    day_range = range(1, len(df_user['day']) + 1)
    # Figure size
    plt.figure(figsize=(14, 10))
    plt.bar(df_user['day'], df_user['user_num'], bar_width,
            alpha=opacity, color='c', label='user')
    plt.bar(df_item['day'] + bar_width, df_item['item_num'],
            bar_width, alpha=opacity, color='g', label='item')
    plt.bar(df_ui['day'] + bar_width * 2, df_ui['user_item_num'],
            bar_width, alpha=opacity, color='m', label='user_item')
    plt.xlabel('day')
    plt.ylabel('number')
    plt.title('February Purchase Table')
    plt.xticks(df_user['day'] + bar_width * 3 / 2, day_range)
    plt.tight_layout()
    plt.legend(prop={'size': 9})
    plt.savefig('./11-2月购买情况可视化.png', dpi=200)

        

  • Sales are weak in the first days of February.

2.3.2  March 2016

    df_ac = get_from_action_data(fname=ACTION_201603_FILE)
    # Convert time to datetime, then map each timestamp to its day of month
    df_ac['time'] = pd.to_datetime(df_ac['time']).apply(lambda x: x.day)
    df_user = df_ac.groupby('time')['user_id'].nunique()
    df_user = df_user.to_frame().reset_index()
    df_user.columns = ['day', 'user_num']
    display(df_user)
    df_item = df_ac.groupby('time')['sku_id'].nunique()
    df_item = df_item.to_frame().reset_index()
    df_item.columns = ['day', 'item_num']
    display(df_item)
    df_ui = df_ac.groupby('time', as_index=False).size()
    df_ui.columns = ['day', 'user_item_num']
    display(df_ui)

    # Bar width
    bar_width = 0.2
    # Transparency
    opacity = 0.4
    # Days
    day_range = range(1, len(df_user['day']) + 1, 1)
    # Figure size
    plt.figure(figsize=(14, 10))
    plt.bar(df_user['day'], df_user['user_num'], bar_width,
            alpha=opacity, color='c', label='user')
    plt.bar(df_item['day'] + bar_width, df_item['item_num'],
            bar_width, alpha=opacity, color='g', label='item')
    plt.bar(df_ui['day'] + bar_width * 2, df_ui['user_item_num'],
            bar_width, alpha=opacity, color='m', label='user_item')
    plt.xlabel('day')
    plt.ylabel('number')
    plt.title('March Purchase Table')
    plt.xticks(df_user['day'] + bar_width * 3 / 2., day_range)
    plt.tight_layout()
    plt.legend(prop={'size': 9})
    plt.savefig('./12-3月购买情况可视化.png', dpi=200)

        

  • Sales on March 14th and 15th are notably strong.

2.3.3  April 2016

    df_ac = get_from_action_data(fname=ACTION_201604_FILE)
    # Convert time to datetime, then map each timestamp to its day of month
    df_ac['time'] = pd.to_datetime(df_ac['time']).apply(lambda x: x.day)
    df_user = df_ac.groupby('time')['user_id'].nunique()
    df_user = df_user.to_frame().reset_index()
    df_user.columns = ['day', 'user_num']
    df_item = df_ac.groupby('time')['sku_id'].nunique()
    df_item = df_item.to_frame().reset_index()
    df_item.columns = ['day', 'item_num']
    df_ui = df_ac.groupby('time', as_index=False).size()
    df_ui.columns = ['day', 'user_item_num']

    bar_width = 0.2
    opacity = 0.4
    day_range = range(1, len(df_user['day']) + 1, 1)
    plt.figure(figsize=(14, 10))
    plt.bar(df_user['day'], df_user['user_num'], bar_width,
            alpha=opacity, color='c', label='user')
    plt.bar(df_item['day'] + bar_width, df_item['item_num'],
            bar_width, alpha=opacity, color='g', label='item')
    plt.bar(df_ui['day'] + bar_width * 2, df_ui['user_item_num'],
            bar_width, alpha=opacity, color='m', label='user_item')
    plt.xlabel('day')
    plt.ylabel('number')
    plt.title('April Purchase Table')
    plt.xticks(df_user['day'] + bar_width * 3 / 2, day_range)
    plt.tight_layout()
    plt.legend(prop={'size': 9})
    plt.savefig('./14-4月购买情况可视化.png', dpi=200)

        

2.4  Weekly Sales by Product Category

2.4.1  A Function to Extract Category Data from the Action Records

    # Extract product-category data from the action records
    def get_from_action_data(fname, chunk_size=50000):
        reader = pd.read_csv(fname, header=0, iterator=True)
        chunks = []
        loop = True
        while loop:
            try:
                chunk = reader.get_chunk(chunk_size)[['cate', 'brand', 'type', 'time']]
                chunks.append(chunk)
            except StopIteration:
                loop = False
                print('Iteration is stopped')
        df_ac = pd.concat(chunks, ignore_index=True)
        # type == 4 means purchase
        df_ac = df_ac[df_ac['type'] == 4]
        return df_ac[['cate', 'brand', 'type', 'time']]

2.4.2  Extracting All Category Data

    df_ac = []
    df_ac.append(get_from_action_data(fname=ACTION_201602_FILE))
    df_ac.append(get_from_action_data(fname=ACTION_201603_FILE))
    df_ac.append(get_from_action_data(fname=ACTION_201604_FILE))
    df_ac = pd.concat(df_ac, ignore_index=True)
    # Convert the time column to datetime
    df_ac['time'] = pd.to_datetime(df_ac['time'])
    # Map each timestamp to its weekday (Monday = 1, Sunday = 7)
    df_ac['time'] = df_ac['time'].apply(lambda x: x.weekday() + 1)
    # How many product categories are there?
    df_ac.groupby(df_ac['cate']).count()

        

2.4.3  Visualizing Sales per Category

List the system fonts:

    # Find the Chinese fonts installed on this machine
    from matplotlib.font_manager import FontManager
    fm = FontManager()
    [font.name for font in fm.ttflist]

        

    # Number of purchases per category for each weekday
    plt.rcParams['font.family'] = 'STKaiti'
    plt.rcParams['font.size'] = 25
    df_product = df_ac['brand'].groupby([df_ac['time'], df_ac['cate']]).count()
    df_product = df_product.unstack()
    df_product.plot(kind='bar', figsize=(14, 10))
    plt.title(label='不同商品周销量表', pad=20)
    plt.savefig('./16-不同商品周销量表.png', dpi=200)

        

    df_product = df_ac['brand'].groupby([df_ac['time'], df_ac['cate']]).count()
    df_product.head(10)

        

2.5  Monthly Sales per Category

2.5.1  Loading All the Data

    df_ac2 = get_from_action_data(fname=ACTION_201602_FILE)
    # Convert time to day of month
    df_ac2['time'] = pd.to_datetime(df_ac2['time']).apply(lambda x: x.day)
    df_ac3 = get_from_action_data(fname=ACTION_201603_FILE)
    df_ac3['time'] = pd.to_datetime(df_ac3['time']).apply(lambda x: x.day)
    df_ac4 = get_from_action_data(fname=ACTION_201604_FILE)
    df_ac4['time'] = pd.to_datetime(df_ac4['time']).apply(lambda x: x.day)

2.5.2  Daily Sales of Category 8 in Each Month

    dc_cate2 = df_ac2[df_ac2['cate'] == 8]
    dc_cate2 = dc_cate2['brand'].groupby(dc_cate2['time']).count()
    display(dc_cate2.head())
    dc_cate2 = dc_cate2.to_frame().reset_index()  # reset the index
    display(dc_cate2.head())
    dc_cate2.columns = ['day', 'product_num']

    dc_cate3 = df_ac3[df_ac3['cate'] == 8]
    dc_cate3 = dc_cate3['brand'].groupby(dc_cate3['time']).count()
    dc_cate3 = dc_cate3.to_frame().reset_index()
    dc_cate3.columns = ['day', 'product_num']

    dc_cate4 = df_ac4[df_ac4['cate'] == 8]
    dc_cate4 = dc_cate4['brand'].groupby(dc_cate4['time']).count()
    dc_cate4 = dc_cate4.to_frame().reset_index()
    dc_cate4.columns = ['day', 'product_num']

        

2.5.3  Visualizing Category 8's Daily Sales

    # Bar width
    bar_width = 0.2
    # Transparency
    opacity = 0.4
    # Days
    day_range = range(1, len(dc_cate3['day']) + 1, 1)
    plt.rcParams['font.family'] = 'STKaiti'
    plt.rcParams['font.size'] = 25
    # Figure size
    plt.figure(figsize=(14, 10))
    plt.bar(dc_cate2['day'], dc_cate2['product_num'], bar_width,
            alpha=opacity, color='c', label='February')
    plt.bar(dc_cate3['day'] + bar_width, dc_cate3['product_num'],
            bar_width, alpha=opacity, color='g', label='March')
    plt.bar(dc_cate4['day'] + bar_width * 2, dc_cate4['product_num'],
            bar_width, alpha=opacity, color='m', label='April')
    plt.xlabel('day')
    plt.ylabel('number')
    plt.title('商品8 销量统计表', pad=20)
    plt.xticks(dc_cate3['day'] + bar_width * 3 / 2, day_range)
    plt.tight_layout()
    plt.legend(prop={'size': 9})
    plt.savefig('./17-商品8每月按天统计销量可视化.png', dpi=200)

        

2.6  A Single User's Interaction Trajectory for a Single Item

2.6.1  A Function to Filter User-Item Records

    def spec_ui_action_data(fname, user_id, item_id, chunk_size=100000):
        reader = pd.read_csv(fname, header=0, iterator=True)
        chunks = []
        loop = True
        while loop:
            try:
                chunk = reader.get_chunk(chunk_size)[['user_id', 'sku_id', 'type', 'time']]
                chunks.append(chunk)
            except StopIteration:
                loop = False
                print('Iteration is stopped')
        df_ac = pd.concat(chunks, ignore_index=True)
        df_ac = df_ac[(df_ac['user_id'] == user_id) & (df_ac['sku_id'] == item_id)]
        return df_ac

2.6.2  Filtering Across All Months

    def explore_user_item_via_time():
        user_id = 266079
        item_id = 138778
        df_ac = []
        df_ac.append(spec_ui_action_data(ACTION_201602_FILE, user_id, item_id))
        df_ac.append(spec_ui_action_data(ACTION_201603_FILE, user_id, item_id))
        df_ac.append(spec_ui_action_data(ACTION_201604_FILE, user_id, item_id))
        df_ac = pd.concat(df_ac, ignore_index=False)
        print(df_ac.sort_values(by='time'))

2.6.3  Running the Filter

explore_user_item_via_time()

        

3  Feature Engineering

3.1  Feature Overview

3.1.1  Basic User Features

  • Extract basic user features. Since user attributes are mostly categorical, one-hot encode age, sex, and user_lv_cd; leave the registration date alone for now.

3.1.2  Basic Product Features:

  • Extract basic features from the product file.
  • One-hot encode attributes a1, a2, and a3.
  • Use category and brand directly as features.

3.1.3  Comment Features:

  • Split by time window.
  • One-hot encode the comment-count bucket.

3.1.4  Behavior Features:

  • Split by time window.
  • One-hot encode the behavior type.
  • Aggregate by user-category and by user-category-item, then compute:
  • the user's action counts on other items in the same category;
  • cumulative action counts over several windows (3, 5, 7, 10, 15, 21, 30 days).

3.1.5  Cumulative User Features

  • Split by time window.
  • For each of the user's behavior types:
  • purchase conversion rate;
  • mean count.

3.1.6  Recent User Behavior Features:

  • On top of the cumulative user features, extract the user's features over the last month and the last three days, then compute the share of the month's behavior that falls outside the most recent three days.

3.1.7  User Behavior Across Items of the Same Category:

  • Counts of each action type per category for the user.
  • The share of the user's actions on each category relative to their actions across all categories.

3.1.8  Cumulative Product Features

  • Split by time window.
  • For each behavior type toward the product:
  • purchase conversion rate;
  • mean count.

3.1.9  Category Features

  • For each product category, per time window:
  • purchase conversion rate;
  • mean count.

3.2  Data Loading

Imports

    from datetime import datetime
    from datetime import timedelta
    import pandas as pd
    import numpy as np
    import gc

Path variables

    action_1_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201602.csv'
    action_2_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201603.csv'
    action_3_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Action_201604.csv'
    comment_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Comment.csv'
    product_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_Product.csv'
    user_path = 'G:/01-project/07-机器学习/08-京东购买意向预测/data/JData_User.csv'

Data-loading functions

    def get_actions_1():
        action = pd.read_csv(action_1_path)
        action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']] = \
            action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']].astype('float32')
        return action

    def get_actions_2():
        action = pd.read_csv(action_2_path)
        action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']] = \
            action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']].astype('float32')
        return action

    def get_actions_3():
        action = pd.read_csv(action_3_path)
        action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']] = \
            action[['user_id', 'sku_id', 'model_id', 'type', 'cate', 'brand']].astype('float32')
        return action

    # Read and concatenate all the action files
    def get_all_action():
        action_1 = get_actions_1()
        action_2 = get_actions_2()
        action_3 = get_actions_3()
        actions = pd.concat([action_1, action_2, action_3])  # type: pd.DataFrame
        return actions

    # Slice the action records for a given time window
    def get_actions(start_date, end_date, all_actions):
        """
        :param start_date: window start date
        :param end_date: window end date
        :return: actions within the window
        """
        actions = all_actions[(all_actions.time >= start_date) & (all_actions.time < end_date)].copy()
        return actions
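Note that get_actions compares the time column against the date strings directly. This is safe here because the timestamps are formatted 'YYYY-MM-DD HH:MM:SS', whose lexicographic order matches chronological order; a quick check:

    # ISO-style timestamp strings sort chronologically, so string comparison
    # works without converting the column to datetime first.
    assert '2016-02-01 09:00:00' >= '2016-02-01'
    assert '2016-02-01 09:00:00' < '2016-02-04'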

3.3  User Features

3.3.1  Basic User Features

  • Extract basic user features. Since user attributes are mostly categorical, one-hot encode age, sex, and user_lv_cd; leave the registration date alone for now.

    from sklearn import preprocessing
    le = preprocessing.LabelEncoder()
    le.fit(["paris", "paris", "tokyo", "amsterdam"])
    le.fit_transform(["paris", "paris", "tokyo", "amsterdam"])
    # array([1, 1, 2, 0], dtype=int64)

    user = pd.read_csv(user_path)
    display(user.head())
    le = preprocessing.LabelEncoder()
    age_df = le.fit_transform(user['age'])  # a NumPy array
    display(age_df[:5])
    del user, age_df
    gc.collect()

        

    from sklearn import preprocessing

    def get_basic_user_feat():
        # Handle the Chinese age strings: drop missing values, convert age and
        # sex to numeric types, then one-hot encode
        user = pd.read_csv(user_path)
        user.dropna(axis=0, how='any', inplace=True)
        user['sex'] = user['sex'].astype(int)
        user['age'] = user['age'].astype(int)
        le = preprocessing.LabelEncoder()
        age_df = le.fit_transform(user['age'])
        age_df = pd.get_dummies(age_df, prefix='age')  # one-hot encoding
        sex_df = pd.get_dummies(user['sex'], prefix='sex')
        user_lv_df = pd.get_dummies(user['user_lv_cd'], prefix='user_lv_cd')
        user = pd.concat([user['user_id'], age_df, sex_df, user_lv_df], axis=1)
        return user

  • pd.get_dummies is equivalent to one-hot encoding; it is commonly used to turn discrete categorical values into one-hot form.

preprocessing.LabelEncoder(): normalizes labels, mapping each label to an integer in range(n_classes):

    from sklearn import preprocessing
    le = preprocessing.LabelEncoder()
    le.fit(["paris", "paris", "tokyo", "amsterdam"])
    print('classes: %s' % le.classes_)                                       # ['amsterdam' 'paris' 'tokyo']
    print('encoded labels: %s' % le.transform(["tokyo", "tokyo", "paris"]))  # [2 2 1]
    print('decoded labels: %s' % le.inverse_transform([2, 2, 1]))            # ['tokyo' 'tokyo' 'paris']

    user = get_basic_user_feat()
    display(user.head())
    del user
    gc.collect()

        

3.4  Product Features

3.4.1  Basic Product Features

Extract basic features from the product file: one-hot encode attributes a1, a2, and a3, and use category and brand directly as features.

    def get_basic_product_feat():
        product = pd.read_csv(product_path)
        attr1_df = pd.get_dummies(product["a1"], prefix="a1")
        attr2_df = pd.get_dummies(product["a2"], prefix="a2")
        attr3_df = pd.get_dummies(product["a3"], prefix="a3")
        product = pd.concat([product[['sku_id', 'cate', 'brand']], attr1_df,
                             attr2_df, attr3_df], axis=1)
        return product

3.5  Comment Features

  • Split by time window.
  • One-hot encode the comment-count bucket.

    def get_comments_product_feat(end_date):
        comments = pd.read_csv(comment_path)
        comments = comments[comments.dt <= end_date]  # comment data up to end_date
        df = pd.get_dummies(comments['comment_num'], prefix='comment_num')
        # Some windows may lack certain comment_num levels (this happened in the
        # test set), so make sure all five columns exist
        for i in range(0, 5):
            if 'comment_num_' + str(i) not in df.columns:
                df['comment_num_' + str(i)] = 0
        df = df[['comment_num_0', 'comment_num_1', 'comment_num_2', 'comment_num_3', 'comment_num_4']]
        comments = pd.concat([comments, df], axis=1)  # type: pd.DataFrame
        comments = comments[['sku_id', 'has_bad_comment', 'bad_comment_rate', 'comment_num_0', 'comment_num_1',
                             'comment_num_2', 'comment_num_3', 'comment_num_4']]
        return comments
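The column-backfilling loop matters because pd.get_dummies only creates columns for values that actually occur. A toy illustration:

    import pandas as pd

    s = pd.Series([1, 2, 2, 4])  # levels 0 and 3 never occur
    df = pd.get_dummies(s, prefix='comment_num')
    print(df.columns.tolist())
    # ['comment_num_1', 'comment_num_2', 'comment_num_4'] -- levels 0 and 3 are
    # absent, which is why the function above backfills missing columns with zeros.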

Running the comment transformation:

    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    display(start_date)
    comments = get_comments_product_feat(end_date)
    display(comments.head(), comments.shape)
    del comments
    gc.collect()

        

3.6  Behavior Features

3.6.1  User-Category-Item Counts

  • Split by time window.
  • One-hot encode the behavior type.
  • Aggregate by user-category and by user-category-item, then compute:
    • the user's action counts on other items in the same category;
    • the difference between the user's counts on the target item and the window's per-day mean.

3.6.1.1  Function Definition

    def get_action_feat(start_date, end_date, all_actions, day):
        actions = get_actions(start_date, end_date, all_actions)
        actions = actions[['user_id', 'sku_id', 'cate', 'type']]
        # One-hot encode the behavior type
        prefix = 'action_before_%s' % day
        df = pd.get_dummies(actions['type'], prefix=prefix)
        actions = pd.concat([actions, df], axis=1)
        # Group by user-category-item: each user's action counts per item
        actions = actions.groupby(['user_id', 'cate', 'sku_id'], as_index=False).sum()
        # Group by user-category: each user's action counts per category
        user_cate = actions.groupby(['user_id', 'cate'], as_index=False).sum()
        del user_cate['sku_id']
        del user_cate['type']
        # Merge the two groupings
        actions = pd.merge(actions, user_cate, how='left', on=['user_id', 'cate'])
        # The two groupings share column names, so pandas appends _x/_y suffixes;
        # the differences below therefore count the user's actions on OTHER items
        # in the same category. The windows (3, 5, 7, 10, 15, 21, 30) are day spans.
        actions[prefix + '_1.0_y'] = actions[prefix + '_1.0_y'] - actions[prefix + '_1.0_x']
        actions[prefix + '_2.0_y'] = actions[prefix + '_2.0_y'] - actions[prefix + '_2.0_x']
        actions[prefix + '_3.0_y'] = actions[prefix + '_3.0_y'] - actions[prefix + '_3.0_x']
        actions[prefix + '_4.0_y'] = actions[prefix + '_4.0_y'] - actions[prefix + '_4.0_x']
        actions[prefix + '_5.0_y'] = actions[prefix + '_5.0_y'] - actions[prefix + '_5.0_x']
        actions[prefix + '_6.0_y'] = actions[prefix + '_6.0_y'] - actions[prefix + '_6.0_x']
        # Difference between each count and its per-day mean over the window
        actions[prefix + 'minus_mean_1'] = actions[prefix + '_1.0_x'] - (actions[prefix + '_1.0_x'] / day)
        actions[prefix + 'minus_mean_2'] = actions[prefix + '_2.0_x'] - (actions[prefix + '_2.0_x'] / day)
        actions[prefix + 'minus_mean_3'] = actions[prefix + '_3.0_x'] - (actions[prefix + '_3.0_x'] / day)
        actions[prefix + 'minus_mean_4'] = actions[prefix + '_4.0_x'] - (actions[prefix + '_4.0_x'] / day)
        actions[prefix + 'minus_mean_5'] = actions[prefix + '_5.0_x'] - (actions[prefix + '_5.0_x'] / day)
        actions[prefix + 'minus_mean_6'] = actions[prefix + '_6.0_x'] - (actions[prefix + '_6.0_x'] / day)
        del actions['type']
        return actions
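The _x/_y arithmetic above leans on pd.merge's default suffixing of overlapping column names; a toy example of that behavior:

    import pandas as pd

    left = pd.DataFrame({'user_id': [1], 'cnt': [3]})    # per-item count
    right = pd.DataFrame({'user_id': [1], 'cnt': [10]})  # per-category count
    merged = pd.merge(left, right, on='user_id')
    print(merged.columns.tolist())            # ['user_id', 'cnt_x', 'cnt_y']
    print(merged['cnt_y'] - merged['cnt_x'])  # 7: actions on the other items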

3.6.1.2  Walkthrough

Load all the data, then slice out a time window:

    all_actions = get_all_action()
    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    # Slice the records inside the window
    actions = get_actions(start_date, end_date, all_actions)
    display(actions.head(), actions.shape)
    del all_actions
    gc.collect()

        

Group by user-category-item: each user's action counts on items within each category:

    # Keep only the needed columns
    actions = actions[['user_id', 'sku_id', 'cate', 'type']]
    # One-hot encode the behavior type
    df = pd.get_dummies(actions['type'], prefix='action_before_%s' % 3)
    display(df.head())
    # Merge
    actions = pd.concat([actions, df], axis=1)  # type: pd.DataFrame
    display(actions.head(), actions.shape)
    del df
    gc.collect()
    # Group by user-category-item: each user's action counts per item
    actions = actions.groupby(['user_id', 'cate', 'sku_id'], as_index=False).sum()
    display(actions.head(), actions.shape)

A quick groupby demo (data.groupby() performs grouped aggregation):

    import pandas as pd
    df = pd.DataFrame(data={'books': ['bk1', 'bk1', 'bk1', 'bk2', 'bk2', 'bk3'],
                            'price': [12, 12, 12, 15, 15, 17],
                            'num': [2, 1, 1, 4, 2, 2]})
    display(df)
    display(df.groupby('books', as_index=True).sum())
    display(df.groupby('books', as_index=False).sum())

        

  • Rows sharing the same category value are aggregated into one group.

Group by user-category: each user's action counts per product category:

    # Group by user-category: each user's action counts per category
    user_cate = actions.groupby(['user_id', 'cate'], as_index=False).sum()
    del user_cate['sku_id']
    del user_cate['type']
    display(user_cate.head(), user_cate.shape)
    actions = pd.merge(actions, user_cate, how='left', on=['user_id', 'cate'])
    del user_cate
    gc.collect()
    display(actions.head(), actions.shape)

        

The user's action counts on other items in the same category:

    prefix = 'action_before_%s' % 3
    actions[prefix + '_1_y'] = actions[prefix + '_1.0_y'] - actions[prefix + '_1.0_x']
    display(actions.head(), actions.shape)
    del actions
    gc.collect()

        

3.6.2  User-Behavior Features

3.6.2.1  Cumulative User Features

  • Split by time window.
  • For each of the user's behavior types:
    • purchase conversion rate;
    • mean count.

3.6.2.1.1  Function Definition

    def get_accumulate_user_feat(end_date, all_actions, day):
        start_date = datetime.strptime(end_date, '%Y-%m-%d') - timedelta(days=day)
        start_date = start_date.strftime('%Y-%m-%d')
        prefix = 'user_action_%s' % day
        actions = get_actions(start_date, end_date, all_actions)
        df = pd.get_dummies(actions['type'], prefix=prefix)  # one-hot encoding
        actions['date'] = pd.to_datetime(actions['time']).apply(lambda x: x.date())
        actions = pd.concat([actions[['user_id', 'date']], df], axis=1)
        del df
        gc.collect()
        # Group by user; compute conversion rates and means per behavior type
        actions = actions.groupby(['user_id'], as_index=False).sum()
        actions[prefix + '_1_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_1.0'])
        actions[prefix + '_2_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_2.0'])
        actions[prefix + '_3_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_3.0'])
        actions[prefix + '_5_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_5.0'])
        actions[prefix + '_6_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_6.0'])
        # Means
        actions[prefix + '_1_mean'] = actions[prefix + '_1.0'] / day
        actions[prefix + '_2_mean'] = actions[prefix + '_2.0'] / day
        actions[prefix + '_3_mean'] = actions[prefix + '_3.0'] / day
        actions[prefix + '_4_mean'] = actions[prefix + '_4.0'] / day
        actions[prefix + '_5_mean'] = actions[prefix + '_5.0'] / day
        actions[prefix + '_6_mean'] = actions[prefix + '_6.0'] / day
        return actions

A quick sanity check of the log arithmetic:

    np.log2(16) - np.log2(32)  # -1.0, i.e. log2(16/32)
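The *_ratio features are conversion rates in log space: log(1 + buys) - log(1 + other_action) = log((1 + buys) / (1 + other_action)). Adding 1 keeps the expression defined when a count is zero, and the log difference compresses the heavy-tailed counts. A minimal numeric sketch:

    import numpy as np

    buys, browses = 2, 40
    print(np.log(1 + buys) - np.log(1 + browses))  # ~ -2.615
    print(np.log((1 + buys) / (1 + browses)))      # identical value
    # A user who never browsed still gets a well-defined ratio:
    print(np.log(1 + buys) - np.log(1 + 0))        # log(3)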

3.6.2.1.2  Walkthrough

Load all the data, then slice out a time window:

    prefix = 'user_action_%s' % 3
    all_actions = get_all_action()
    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    # Slice the records inside the window
    actions = get_actions(start_date, end_date, all_actions)
    display(actions.head(), actions.shape)
    del all_actions
    gc.collect()

        

Counting the user's behaviors:

    df = pd.get_dummies(actions['type'], prefix=prefix)
    display(df.head(), df.shape)
    actions['date'] = pd.to_datetime(actions['time']).apply(lambda x: x.date())
    actions = pd.concat([actions[['user_id', 'date']], df], axis=1)
    actions = actions.groupby(['user_id'], as_index=False).sum()
    display(actions.head(), actions.shape)

        

Conversion rate and mean for one behavior type:

    actions[prefix + '_1_ratio'] = np.log(1 + actions[prefix + '_4.0']) - np.log(1 + actions[prefix + '_1.0'])
    actions[prefix + '_1_mean'] = actions[prefix + '_1.0'] / 3
    actions.head(20)

3.6.2.2  Recent User Behavior Features

On top of the cumulative user features above, extract the user's features over the last month and the last three days, then compute the share of the month's behavior that falls outside the most recent three days.

    def get_recent_user_feat(end_date, all_actions):
        actions_3 = get_accumulate_user_feat(end_date, all_actions, 3)
        actions_30 = get_accumulate_user_feat(end_date, all_actions, 30)
        actions = pd.merge(actions_3, actions_30, how='left', on='user_id')
        del actions_3
        del actions_30
        gc.collect()
        actions['recent_action1'] = np.log(1 + actions['user_action_30_1.0'] - actions['user_action_3_1.0']) - np.log(1 + actions['user_action_30_1.0'])
        actions['recent_action2'] = np.log(1 + actions['user_action_30_2.0'] - actions['user_action_3_2.0']) - np.log(1 + actions['user_action_30_2.0'])
        actions['recent_action3'] = np.log(1 + actions['user_action_30_3.0'] - actions['user_action_3_3.0']) - np.log(1 + actions['user_action_30_3.0'])
        actions['recent_action4'] = np.log(1 + actions['user_action_30_4.0'] - actions['user_action_3_4.0']) - np.log(1 + actions['user_action_30_4.0'])
        actions['recent_action5'] = np.log(1 + actions['user_action_30_5.0'] - actions['user_action_3_5.0']) - np.log(1 + actions['user_action_30_5.0'])
        actions['recent_action6'] = np.log(1 + actions['user_action_30_6.0'] - actions['user_action_3_6.0']) - np.log(1 + actions['user_action_30_6.0'])
        return actions

3.6.2.3  User-Category Interaction Features

  • Counts of each action type per category for the user.
  • The share of the user's actions on each category relative to their actions across all categories.

3.6.2.3.1  Function Definition

    # User-category interaction features
    def get_user_cate_feature(start_date, end_date, all_actions):
        actions = get_actions(start_date, end_date, all_actions)
        actions = actions[['user_id', 'cate', 'type']]
        df = pd.get_dummies(actions['type'], prefix='type')
        actions = pd.concat([actions[['user_id', 'cate']], df], axis=1)
        actions = actions.groupby(['user_id', 'cate']).sum()
        actions = actions.unstack()
        actions.columns = actions.columns.swaplevel(0, 1)
        actions.columns = actions.columns.droplevel()
        actions.columns = [
            'cate_4_type1', 'cate_5_type1', 'cate_6_type1', 'cate_7_type1',
            'cate_8_type1', 'cate_9_type1', 'cate_10_type1', 'cate_11_type1',
            'cate_4_type2', 'cate_5_type2', 'cate_6_type2', 'cate_7_type2',
            'cate_8_type2', 'cate_9_type2', 'cate_10_type2', 'cate_11_type2',
            'cate_4_type3', 'cate_5_type3', 'cate_6_type3', 'cate_7_type3',
            'cate_8_type3', 'cate_9_type3', 'cate_10_type3', 'cate_11_type3',
            'cate_4_type4', 'cate_5_type4', 'cate_6_type4', 'cate_7_type4',
            'cate_8_type4', 'cate_9_type4', 'cate_10_type4', 'cate_11_type4',
            'cate_4_type5', 'cate_5_type5', 'cate_6_type5', 'cate_7_type5',
            'cate_8_type5', 'cate_9_type5', 'cate_10_type5', 'cate_11_type5',
            'cate_4_type6', 'cate_5_type6', 'cate_6_type6', 'cate_7_type6',
            'cate_8_type6', 'cate_9_type6', 'cate_10_type6', 'cate_11_type6']
        actions = actions.fillna(0)
        actions['cate_action_sum'] = actions.sum(axis=1)
        # Share of the user's actions on each category relative to all categories
        actions['cate8_percentage'] = (
            actions['cate_8_type1'] + actions['cate_8_type2'] +
            actions['cate_8_type3'] + actions['cate_8_type4'] +
            actions['cate_8_type5'] + actions['cate_8_type6']) / actions['cate_action_sum']
        actions['cate4_percentage'] = (
            actions['cate_4_type1'] + actions['cate_4_type2'] +
            actions['cate_4_type3'] + actions['cate_4_type4'] +
            actions['cate_4_type5'] + actions['cate_4_type6']) / actions['cate_action_sum']
        actions['cate5_percentage'] = (
            actions['cate_5_type1'] + actions['cate_5_type2'] +
            actions['cate_5_type3'] + actions['cate_5_type4'] +
            actions['cate_5_type5'] + actions['cate_5_type6']) / actions['cate_action_sum']
        actions['cate6_percentage'] = (
            actions['cate_6_type1'] + actions['cate_6_type2'] +
            actions['cate_6_type3'] + actions['cate_6_type4'] +
            actions['cate_6_type5'] + actions['cate_6_type6']) / actions['cate_action_sum']
        actions['cate7_percentage'] = (
            actions['cate_7_type1'] + actions['cate_7_type2'] +
            actions['cate_7_type3'] + actions['cate_7_type4'] +
            actions['cate_7_type5'] + actions['cate_7_type6']) / actions['cate_action_sum']
        actions['cate9_percentage'] = (
            actions['cate_9_type1'] + actions['cate_9_type2'] +
            actions['cate_9_type3'] + actions['cate_9_type4'] +
            actions['cate_9_type5'] + actions['cate_9_type6']) / actions['cate_action_sum']
        actions['cate10_percentage'] = (
            actions['cate_10_type1'] + actions['cate_10_type2'] +
            actions['cate_10_type3'] + actions['cate_10_type4'] +
            actions['cate_10_type5'] + actions['cate_10_type6']) / actions['cate_action_sum']
        actions['cate11_percentage'] = (
            actions['cate_11_type1'] + actions['cate_11_type2'] +
            actions['cate_11_type3'] + actions['cate_11_type4'] +
            actions['cate_11_type5'] + actions['cate_11_type6']) / actions['cate_action_sum']
        # Category 8's share of each action type, in log space
        actions['cate8_type1_percentage'] = np.log(
            1 + actions['cate_8_type1']) - np.log(
            1 + actions['cate_8_type1'] + actions['cate_4_type1'] +
            actions['cate_5_type1'] + actions['cate_6_type1'] +
            actions['cate_7_type1'] + actions['cate_9_type1'] +
            actions['cate_10_type1'] + actions['cate_11_type1'])
        actions['cate8_type2_percentage'] = np.log(
            1 + actions['cate_8_type2']) - np.log(
            1 + actions['cate_8_type2'] + actions['cate_4_type2'] +
            actions['cate_5_type2'] + actions['cate_6_type2'] +
            actions['cate_7_type2'] + actions['cate_9_type2'] +
            actions['cate_10_type2'] + actions['cate_11_type2'])
        actions['cate8_type3_percentage'] = np.log(
            1 + actions['cate_8_type3']) - np.log(
            1 + actions['cate_8_type3'] + actions['cate_4_type3'] +
            actions['cate_5_type3'] + actions['cate_6_type3'] +
            actions['cate_7_type3'] + actions['cate_9_type3'] +
            actions['cate_10_type3'] + actions['cate_11_type3'])
        actions['cate8_type4_percentage'] = np.log(
            1 + actions['cate_8_type4']) - np.log(
            1 + actions['cate_8_type4'] + actions['cate_4_type4'] +
            actions['cate_5_type4'] + actions['cate_6_type4'] +
            actions['cate_7_type4'] + actions['cate_9_type4'] +
            actions['cate_10_type4'] + actions['cate_11_type4'])
        actions['cate8_type5_percentage'] = np.log(
            1 + actions['cate_8_type5']) - np.log(
            1 + actions['cate_8_type5'] + actions['cate_4_type5'] +
            actions['cate_5_type5'] + actions['cate_6_type5'] +
            actions['cate_7_type5'] + actions['cate_9_type5'] +
            actions['cate_10_type5'] + actions['cate_11_type5'])
        actions['cate8_type6_percentage'] = np.log(
            1 + actions['cate_8_type6']) - np.log(
            1 + actions['cate_8_type6'] + actions['cate_4_type6'] +
            actions['cate_5_type6'] + actions['cate_6_type6'] +
            actions['cate_7_type6'] + actions['cate_9_type6'] +
            actions['cate_10_type6'] + actions['cate_11_type6'])
        actions['user_id'] = actions.index
        actions = actions[[
            'user_id', 'cate8_percentage', 'cate4_percentage', 'cate5_percentage',
            'cate6_percentage', 'cate7_percentage', 'cate9_percentage',
            'cate10_percentage', 'cate11_percentage', 'cate8_type1_percentage',
            'cate8_type2_percentage', 'cate8_type3_percentage',
            'cate8_type4_percentage', 'cate8_type5_percentage',
            'cate8_type6_percentage']]
        return actions

3.6.2.3.2  Walkthrough

Load all the data, then slice out a time window:

    prefix = 'user_action_%s' % 3
    all_actions = get_all_action()
    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    # Slice the records inside the window
    actions = get_actions(start_date, end_date, all_actions)
    actions = actions[['user_id', 'cate', 'type']]
    display(actions.head(), actions.shape)
    del all_actions
    gc.collect()

        

Group by user and category:

    df = pd.get_dummies(actions['type'], prefix='type')
    actions = pd.concat([actions[['user_id', 'cate']], df], axis=1)
    actions = actions.groupby(['user_id', 'cate']).sum()
    actions.head(20)

        

Unstack the row index into columns:

    actions = actions.unstack()
    actions.head()

        

Swap the column index levels:

    actions.columns = actions.columns.swaplevel(0, 1)
    actions.head()

        

Drop the outer column index level:

    actions.columns = actions.columns.droplevel()
    actions.head()

        

Reassign flat column names:

    actions.columns = [
        'cate_4_type1', 'cate_5_type1', 'cate_6_type1', 'cate_7_type1',
        'cate_8_type1', 'cate_9_type1', 'cate_10_type1', 'cate_11_type1',
        'cate_4_type2', 'cate_5_type2', 'cate_6_type2', 'cate_7_type2',
        'cate_8_type2', 'cate_9_type2', 'cate_10_type2', 'cate_11_type2',
        'cate_4_type3', 'cate_5_type3', 'cate_6_type3', 'cate_7_type3',
        'cate_8_type3', 'cate_9_type3', 'cate_10_type3', 'cate_11_type3',
        'cate_4_type4', 'cate_5_type4', 'cate_6_type4', 'cate_7_type4',
        'cate_8_type4', 'cate_9_type4', 'cate_10_type4', 'cate_11_type4',
        'cate_4_type5', 'cate_5_type5', 'cate_6_type5', 'cate_7_type5',
        'cate_8_type5', 'cate_9_type5', 'cate_10_type5', 'cate_11_type5',
        'cate_4_type6', 'cate_5_type6', 'cate_6_type6', 'cate_7_type6',
        'cate_8_type6', 'cate_9_type6', 'cate_10_type6', 'cate_11_type6']
    actions.head()

        

Fill missing values and compute the row sums:

    actions = actions.fillna(0)
    display(actions.head())
    actions['cate_action_sum'] = actions.sum(axis=1)
    actions.head()

        

The share of the user's actions that fall on category 8:

    actions['cate8_percentage'] = (
        actions['cate_8_type1'] + actions['cate_8_type2'] +
        actions['cate_8_type3'] + actions['cate_8_type4'] +
        actions['cate_8_type5'] + actions['cate_8_type6']) / actions['cate_action_sum']
    actions.head()

        

Category 8's type-1 actions as a share of all type-1 actions:

    actions['cate8_type1_percentage'] = np.log(1 + actions['cate_8_type1']) - np.log(
        1 + actions['cate_8_type1'] + actions['cate_4_type1'] +
        actions['cate_5_type1'] + actions['cate_6_type1'] +
        actions['cate_7_type1'] + actions['cate_9_type1'] +
        actions['cate_10_type1'] + actions['cate_11_type1'])
    actions.head()

        

    actions['user_id'] = actions.index
    actions.head()

        

3.6.3  Product-Behavior Features

3.6.3.1  Cumulative Product Features (function definition)

  • Split by time window.
  • For each behavior type toward the product:
    • purchase conversion rate;
    • mean count.

    def get_accumulate_product_feat(start_date, end_date, all_actions):
        actions = get_actions(start_date, end_date, all_actions)
        df = pd.get_dummies(actions['type'], prefix='product_action')
        # Attach the date and the one-hot encoded behavior types
        actions['date'] = pd.to_datetime(actions['time']).apply(lambda x: x.date())
        actions = pd.concat([actions[['sku_id', 'date']], df], axis=1)
        actions = actions.groupby(['sku_id'], as_index=False).sum()
        # Window length in days
        days_interval = (datetime.strptime(end_date, '%Y-%m-%d') - datetime.strptime(start_date, '%Y-%m-%d')).days
        # Per-item purchase conversion rates
        actions['product_action_1_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_1.0'])
        actions['product_action_2_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_2.0'])
        actions['product_action_3_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_3.0'])
        actions['product_action_5_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_5.0'])
        actions['product_action_6_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_6.0'])
        # Per-day means of each behavior type
        actions['product_action_1_mean'] = actions['product_action_1.0'] / days_interval
        actions['product_action_2_mean'] = actions['product_action_2.0'] / days_interval
        actions['product_action_3_mean'] = actions['product_action_3.0'] / days_interval
        actions['product_action_4_mean'] = actions['product_action_4.0'] / days_interval
        actions['product_action_5_mean'] = actions['product_action_5.0'] / days_interval
        actions['product_action_6_mean'] = actions['product_action_6.0'] / days_interval
        return actions

3.6.3.2  Walkthrough

Load all the data, then slice out a time window:

    prefix = 'user_action_%s' % 3
    all_actions = get_all_action()
    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    # Slice the records inside the window
    actions = get_actions(start_date, end_date, all_actions)
    display(actions.head(), actions.shape)
    del all_actions
    gc.collect()

        

Group and aggregate by item:

    df = pd.get_dummies(actions['type'], prefix='product_action')
    actions['date'] = pd.to_datetime(actions['time']).apply(lambda x: x.date())
    actions = pd.concat([actions[['sku_id', 'date']], df], axis=1)
    actions = actions.groupby(['sku_id'], as_index=False).sum()
    actions.head()

        

actions.head(50)

        

Per-item conversion rates and means:

    days_interval = (datetime.strptime(end_date, '%Y-%m-%d') -
                     datetime.strptime(start_date, '%Y-%m-%d')).days
    print('days in window:', days_interval)
    actions['product_action_1_ratio'] = np.log(1 + actions['product_action_4.0']) - np.log(1 + actions['product_action_1.0'])
    actions['product_action_1_mean'] = actions['product_action_1.0'] / days_interval
    actions.head()

        

3.6.4  Category Features

For each product category, per time window:

  • purchase conversion rate
  • mean count

    def get_accumulate_cate_feat(start_date, end_date, all_actions):
        actions = get_actions(start_date, end_date, all_actions)
        actions['date'] = pd.to_datetime(actions['time']).apply(lambda x: x.date())
        df = pd.get_dummies(actions['type'], prefix='cate_action')
        actions = pd.concat([actions[['cate', 'date']], df], axis=1)
        # Group by category; conversion rate of each behavior type per category
        actions = actions.groupby(['cate'], as_index=False).sum()
        days_interval = (datetime.strptime(end_date, '%Y-%m-%d') - datetime.strptime(start_date, '%Y-%m-%d')).days
        actions['cate_action_1_ratio'] = (np.log(1 + actions['cate_action_4.0']) - np.log(1 + actions['cate_action_1.0']))
        actions['cate_action_2_ratio'] = (np.log(1 + actions['cate_action_4.0']) - np.log(1 + actions['cate_action_2.0']))
        actions['cate_action_3_ratio'] = (np.log(1 + actions['cate_action_4.0']) - np.log(1 + actions['cate_action_3.0']))
        actions['cate_action_5_ratio'] = (np.log(1 + actions['cate_action_4.0']) - np.log(1 + actions['cate_action_5.0']))
        actions['cate_action_6_ratio'] = (np.log(1 + actions['cate_action_4.0']) - np.log(1 + actions['cate_action_6.0']))
        # Group by category; per-day mean of each behavior type over the window
        actions['cate_action_1_mean'] = actions['cate_action_1.0'] / days_interval
        actions['cate_action_2_mean'] = actions['cate_action_2.0'] / days_interval
        actions['cate_action_3_mean'] = actions['cate_action_3.0'] / days_interval
        actions['cate_action_4_mean'] = actions['cate_action_4.0'] / days_interval
        actions['cate_action_5_mean'] = actions['cate_action_5.0'] / days_interval
        actions['cate_action_6_mean'] = actions['cate_action_6.0'] / days_interval
        return actions

3.7  Building the Training and Test Sets

3.7.1  Building the Training Set

  • Labels: use a sliding window; when building the training set, user-item pairs that lead to a purchase are labeled 1.
  • Integrate all the features.

3.7.1.1  Inspecting the Building Blocks

Labeling purchases:

    def get_labels(start_date, end_date, all_actions):
        actions = get_actions(start_date, end_date, all_actions)
        # The target: users who bought a category-8 item
        actions = actions[(actions['type'] == 4) & (actions['cate'] == 8)]
        actions = actions.groupby(['user_id', 'sku_id'], as_index=False).sum()
        actions['label'] = 1
        actions = actions[['user_id', 'sku_id', 'label']]
        return actions

Inspecting the data structures:

    # All action data
    all_actions = get_all_action()
    print("get all actions!")
    display(all_actions.head(), all_actions.shape)
    del all_actions
    gc.collect()

        

    # User features
    user = get_basic_user_feat()
    print('get_basic_user_feat finished')
    display(user.head(), user.shape)
    del user
    gc.collect()

        

    # Basic product features
    product = get_basic_product_feat()
    print('get_basic_product_feat finished')
    display(product.head(), product.shape)
    del product
    gc.collect()

        

    # Recent user behavior features
    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')  # back to string
    all_actions = get_all_action()
    user_acc = get_recent_user_feat(end_date, all_actions)
    display(user_acc.head(), user_acc.shape)
    del all_actions, user_acc
    gc.collect()
    print('get_recent_user_feat finished')

        

3.7.1.2  Assembling the Training Set

Feature engineering: a helper that builds one window of combined features:

    start_date = '2016-02-01'
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')  # back to string
    all_actions = get_all_action()
    user_cate = get_user_cate_feature(start_date, end_date, all_actions)
    display(user_cate.head())
    user_cate = user_cate.reset_index(drop=True)  # reset the index
    display(user_cate.head())
    del all_actions, user_cate
    gc.collect()

        

def make_actions(user, product, all_actions, start_date):
    end_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=3)
    end_date = end_date.strftime('%Y-%m-%d')
    # Fix the time span used by get_accumulate_product_feat / get_accumulate_cate_feat (30 days)
    start_days = datetime.strptime(end_date, '%Y-%m-%d') - timedelta(days=30)
    start_days = start_days.strftime('%Y-%m-%d')
    print(end_date)
    user_acc = get_recent_user_feat(end_date, all_actions)
    print('get_recent_user_feat finished')
    user_cate = get_user_cate_feature(start_date, end_date, all_actions)
    user_cate = user_cate.reset_index(drop=True)  # reset the index
    print('get_user_cate_feature finished')
    product_acc = get_accumulate_product_feat(start_days, end_date, all_actions)
    print('get_accumulate_product_feat finished')
    cate_acc = get_accumulate_cate_feat(start_days, end_date, all_actions)
    print('get_accumulate_cate_feat finished')
    comment_acc = get_comments_product_feat(end_date)
    print('get_comments_product_feat finished')
    # Labels: the 5 days immediately after the feature window
    test_start_date = end_date
    test_end_date = datetime.strptime(test_start_date, '%Y-%m-%d') + timedelta(days=5)
    test_end_date = test_end_date.strftime('%Y-%m-%d')
    labels = get_labels(test_start_date, test_end_date, all_actions)
    print('get labels')
    actions = None
    for i in (3, 5, 7, 10, 15, 21, 30):
        start_days = datetime.strptime(end_date, '%Y-%m-%d') - timedelta(days=i)
        start_days = start_days.strftime('%Y-%m-%d')
        if actions is None:
            actions = get_action_feat(start_days, end_date, all_actions, i)
        else:
            # Note the join keys here
            actions = pd.merge(actions, get_action_feat(start_days, end_date, all_actions, i),
                               how='left',
                               on=['user_id', 'sku_id', 'cate'])
    actions = pd.merge(actions, user, how='left', on='user_id')
    actions = pd.merge(actions, user_acc, how='left', on='user_id')
    actions = pd.merge(actions, user_cate, how='left', on='user_id')
    # Note the join keys here
    actions = pd.merge(actions, product, how='left', on=['sku_id', 'cate'])
    actions = pd.merge(actions, product_acc, how='left', on='sku_id')
    actions = pd.merge(actions, cate_acc, how='left', on='cate')
    actions = pd.merge(actions, comment_acc, how='left', on='sku_id')
    actions = pd.merge(actions, labels, how='left', on=['user_id', 'sku_id'])
    # Fill the NaNs introduced by joining the product, comment and label features
    actions = actions.fillna(0)
    # Sampling
    action_positive = actions[actions['label'] == 1]  # bought
    action_negative = actions[actions['label'] == 0]  # did not buy
    del actions
    neg_len = len(action_positive) * 10  # keep 10 negatives per positive
    action_negative = action_negative.sample(n=neg_len)
    action_sample = pd.concat([action_positive, action_negative], ignore_index=True)
    return action_sample
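Two small tweaks worth considering for the sampling step (my additions, not in the original): pass random_state so the 10:1 downsampling is reproducible across runs, and cap n in case there are fewer negatives than ten times the positives (pandas' sample raises otherwise):

neg_len = min(len(action_positive) * 10, len(action_negative))
action_negative = action_negative.sample(n=neg_len, random_state=2016)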

Build the training data set

def make_train_set(start_date, setNums, f_path, all_actions):
    train_actions = None
    user = get_basic_user_feat()
    print('get_basic_user_feat finished')
    product = get_basic_product_feat()
    print('get_basic_product_feat finished')
    # Sliding window: build several training/validation sets
    for i in range(setNums):
        print(start_date)
        if train_actions is None:
            train_actions = make_actions(user, product, all_actions, start_date)
        else:
            train_actions = pd.concat([train_actions,
                                       make_actions(user, product, all_actions, start_date)],
                                      ignore_index=True)
        # Shift the window forward by one day for the next round
        start_date = datetime.strptime(start_date, '%Y-%m-%d') + timedelta(days=1)
        start_date = start_date.strftime('%Y-%m-%d')
        print('round {0}/{1} over!'.format(i + 1, setNums))
    train_actions.to_csv(f_path, index=False)
    del train_actions

# Training & validation sets
start_date = '2016-03-01'
all_actions = get_all_action()
make_train_set(start_date, 20, 'train_set.csv', all_actions)
del all_actions
gc.collect()
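To see what the sliding window produces, here is a minimal sketch (independent of the project's helpers) that prints the 20 rounds: each uses a 3-day feature window plus the following 5 days for labels, shifted one day per round:

from datetime import datetime, timedelta

start = datetime.strptime('2016-03-01', '%Y-%m-%d')
for i in range(20):
    feat_start = start + timedelta(days=i)     # shifted one day per round
    feat_end = feat_start + timedelta(days=3)  # 3-day feature window
    label_end = feat_end + timedelta(days=5)   # 5-day label window
    print('round {:2d}: features {} ~ {} | labels {} ~ {}'.format(
        i + 1, feat_start.date(), feat_end.date(), feat_end.date(), label_end.date()))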

        

3.7.2  Building the test set

# Test set
val_start_date = '2016-04-01'
all_actions = get_all_action()
make_train_set(val_start_date, 3, 'test_set.csv', all_actions)
del all_actions
gc.collect()

4  Data Modeling

4.1  Imports

import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from matplotlib import pylab as plt
import gc

4.2  Loading the data

data = pd.read_csv('train_set.csv')
display(data.head(), data.shape)
data_X = data.loc[:, data.columns != 'label']
data_y = data.loc[:, data.columns == 'label']
X_train, X_val, y_train, y_val = train_test_split(data_X, data_y, test_size=0.2, random_state=0)
users = X_val[['user_id', 'sku_id', 'cate']].copy()  # keep the IDs for evaluation later
# Drop user_id and sku_id: both are arbitrary integer IDs with little predictive value

del X_train['user_id']
del X_train['sku_id']
del X_val['user_id']  # drop the same columns from the validation split,
del X_val['sku_id']   # otherwise dtrain and dvalid have mismatched features
display(X_train.head(), X_train.shape)
display(X_val.head(), X_val.shape)
del data, data_X, data_y
gc.collect()
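With roughly ten negatives per positive, it may also help to stratify the split so train and validation keep the same label balance; a hedged variant of the call above:

# optional: preserve the 10:1 class ratio on both sides of the split
X_train, X_val, y_train, y_val = train_test_split(
    data_X, data_y, test_size=0.2, random_state=0, stratify=data_y)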

4.3  XGBoost modeling

dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_val, label=y_val)
# min_child_weight=5: minimum sum of instance weights in a child node; if a
#   leaf's weight sum falls below this, splitting stops there, so raising it
#   helps control overfitting.
# gamma=0.1: minimum loss reduction required for a further split on a leaf;
#   larger values are more conservative (0.1~0.2 is typical).
# scale_pos_weight=10: balances positive/negative weights; with imbalanced
#   classes a value > 0 helps the model converge faster.
# eta=0.1: shrinkage, i.e. the learning rate.
param = {'n_estimators': 4000, 'max_depth': 3, 'min_child_weight': 5, 'gamma': 0.1,
         'subsample': 0.9, 'colsample_bytree': 0.8, 'scale_pos_weight': 10, 'eta': 0.1,
         'objective': 'binary:logistic', 'eval_metric': ['auc', 'error']}
num_round = param['n_estimators']  # xgb.train takes the round count here; 'n_estimators' itself is a sklearn-wrapper name
evallist = [(dtrain, 'train'), (dvalid, 'eval')]
bst = xgb.train(param, dtrain, num_round, evallist, early_stopping_rounds=10)
bst.save_model('bst.model')
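Since the booster is saved to disk, a later session can reload it without retraining; roughly:

bst = xgb.Booster()
bst.load_model('bst.model')  # restore the trained booster from disk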

4.4  Feature importance

def feature_importance(bst_xgb):
    importance = bst_xgb.get_fscore()
    importance = sorted(importance.items(), key=lambda x: x[1], reverse=True)
    df = pd.DataFrame(importance, columns=['feature', 'fscore'])
    df['fscore'] = df['fscore'] / df['fscore'].sum()  # normalize to fractions
    file_name = 'feature_importance_.csv'
    df.to_csv(file_name, index=False)

feature_importance(bst)
feature_importance_ = pd.read_csv('feature_importance_.csv')
feature_importance_.head()
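For a quick visual check, xgboost also ships a plotting helper (plt is already imported above); a minimal sketch:

xgb.plot_importance(bst, max_num_features=20)  # top-20 features by fscore
plt.show()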

4.5  Predicting on the validation set

Inspect the validation data, then run the model on it:

X_val_DMatrix = xgb.DMatrix(X_val)
y_pred = bst.predict(X_val_DMatrix)
X_val['pred_label'] = y_pred
X_val.head()

Convert the predicted probabilities into class labels

def label(column):
    if column['pred_label'] > 0.5:
        column['pred_label'] = 1
    else:
        column['pred_label'] = 0
    return column

X_val = X_val.apply(label, axis=1)
X_val.head()
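Row-wise apply works but is slow on large frames; an equivalent vectorized version (same 0.5 threshold) is one line:

X_val['pred_label'] = (y_pred > 0.5).astype(int)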

Attach the true labels, user IDs and SKU IDs

X_val['true_label'] = y_val
X_val['user_id'] = users['user_id']
X_val['sku_id'] = users['sku_id']
X_val.head()

4.6  Model evaluation (validation set)

Buyer counts

# Users who actually bought
all_user_set = X_val[X_val['true_label'] == 1]['user_id'].unique()
print(len(all_user_set))
# Users predicted to buy
all_user_test_set = X_val[X_val['pred_label'] == 1]['user_id'].unique()
print(len(all_user_test_set))

Precision and recall (user level)

pos, neg = 0, 0
for user_id in all_user_test_set:
    if user_id in all_user_set:
        pos += 1
    else:
        neg += 1
all_user_acc = 1.0 * pos / (pos + neg)           # precision over predicted buyers
all_user_recall = 1.0 * pos / len(all_user_set)  # recall over actual buyers
print('Precision of predicted buyers: ' + str(all_user_acc))
print('Recall of predicted buyers: ' + str(all_user_recall))
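The same two numbers also fall out of set arithmetic, which avoids the linear membership scan inside the loop; a sketch:

pred_users = set(all_user_test_set)
true_users = set(all_user_set)
hits = len(pred_users & true_users)  # predicted buyers who really bought
precision = hits / len(pred_users)
recall = hits / len(true_users)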

Precision and recall on (user, SKU) pairs (finer-grained: user-item level)

# Predicted (user, sku) purchase pairs
all_user_test_item_pair = X_val[X_val['pred_label'] == 1]['user_id'].map(str) + '-' + X_val[X_val['pred_label'] == 1]['sku_id'].map(str)
all_user_test_item_pair = np.array(all_user_test_item_pair)
print(len(all_user_test_item_pair))
# Actual (user, sku) purchase pairs
all_user_item_pair = X_val[X_val['true_label'] == 1]['user_id'].map(str) + '-' + X_val[X_val['true_label'] == 1]['sku_id'].map(str)
all_user_item_pair = np.array(all_user_item_pair)
pos, neg = 0, 0
for user_item_pair in all_user_test_item_pair:
    if user_item_pair in all_user_item_pair:
        pos += 1
    else:
        neg += 1
all_item_acc = pos / (pos + neg)
all_item_recall = pos / len(all_user_item_pair)
print('Precision of predicted (user, sku) pairs: ' + str(all_item_acc))
print('Recall of predicted (user, sku) pairs: ' + str(all_item_recall))
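Testing `in` against a NumPy array rescans the whole array on every iteration; np.isin does the same counting in one vectorized call:

pos = int(np.isin(all_user_test_item_pair, all_user_item_pair).sum())
all_item_acc = pos / len(all_user_test_item_pair)
all_item_recall = pos / len(all_user_item_pair)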

4.7  Test data

Load the data

X_data = pd.read_csv('test_set.csv')
display(X_data.head())
X_test, y_test = X_data.iloc[:, :-1], X_data.iloc[:, -1]
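The positional slice assumes 'label' is the last column of test_set.csv (it is, given the merge order in make_actions); selecting by name is more robust:

X_test = X_data.drop(columns=['label'])
y_test = X_data['label']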

Run the prediction

users = X_test[['user_id', 'sku_id', 'cate']].copy()
del X_test['user_id']
del X_test['sku_id']
X_test_DMatrix = xgb.DMatrix(X_test)
y_pred = bst.predict(X_test_DMatrix)
X_test['pred_label'] = y_pred
X_test.head()

Convert the predicted probabilities into class labels

def label(column):
    if column['pred_label'] > 0.5:
        column['pred_label'] = 1
    else:
        column['pred_label'] = 0
    return column

X_test = X_test.apply(label, axis=1)
X_test.head()

Attach the true labels, user IDs and SKU IDs

X_test['true_label'] = y_test
X_test['user_id'] = users['user_id']
X_test['sku_id'] = users['sku_id']
X_test.head()

4.8  Model evaluation (test set)

Buyer counts

# Users who actually bought
all_user_set = X_test[X_test['true_label'] == 1]['user_id'].unique()
print(len(all_user_set))
# Users predicted to buy
all_user_test_set = X_test[X_test['pred_label'] == 1]['user_id'].unique()
print(len(all_user_test_set))

Precision and recall (user level)

pos, neg = 0, 0
for user_id in all_user_test_set:
    if user_id in all_user_set:
        pos += 1
    else:
        neg += 1
all_user_acc = pos / (pos + neg)            # precision over predicted buyers
all_user_recall = pos / len(all_user_set)   # recall over actual buyers
print('Precision of predicted buyers: ' + str(all_user_acc))
print('Recall of predicted buyers: ' + str(all_user_recall))

Precision and recall on actual (user, SKU) pairs

# Predicted (user, sku) purchase pairs
all_user_test_item_pair = X_test[X_test['pred_label'] == 1]['user_id'].map(str) + '-' + X_test[X_test['pred_label'] == 1]['sku_id'].map(str)
all_user_test_item_pair = np.array(all_user_test_item_pair)
print(len(all_user_test_item_pair))
# Actual (user, sku) purchase pairs
all_user_item_pair = X_test[X_test['true_label'] == 1]['user_id'].map(str) + '-' + X_test[X_test['true_label'] == 1]['sku_id'].map(str)
all_user_item_pair = np.array(all_user_item_pair)
pos, neg = 0, 0
for user_item_pair in all_user_test_item_pair:
    if user_item_pair in all_user_item_pair:
        pos += 1
    else:
        neg += 1
all_item_acc = pos / (pos + neg)
all_item_recall = pos / len(all_user_item_pair)
print('Precision of predicted (user, sku) pairs: ' + str(all_item_acc))
print('Recall of predicted (user, sku) pairs: ' + str(all_item_recall))
