Because the desktop version of Sina Weibo is difficult to crawl, we scrape the mobile web version instead.
The steps are as follows:
1. Log in to Sina Weibo in your browser
2. Open m.weibo.cn
3. Search for a topic you are interested in and locate the corresponding data API URL (visible in the browser's network panel)
4. Copy the request's cookies and headers
# -*- coding: utf-8 -*-
import requests
import csv
import os

base_url = 'https://m.weibo.cn/api/comments/show?id=4131150395559419&page={page}'
cookies = {'Cookie': 'xxx'}   # paste the Cookie value captured in step 4
headers = {'User-Agent': 'xxx'}

path = os.getcwd() + "/weibo.csv"
csvfile = open(path, 'a+', encoding='utf-8', newline='')
writer = csv.writer(csvfile)
writer.writerow(('username', 'source', 'comment'))

# The comments API returns JSON of the form {"ok": 1, "data": [...]}
for i in range(0, 83):
    try:
        url = base_url.format(page=i)
        resp = requests.get(url, headers=headers, cookies=cookies)
        jsondata = resp.json()
        if jsondata.get('ok') == 1:
            for item in jsondata.get('data'):
                username = item.get('user').get('screen_name')
                source = item.get('source')
                comment = item.get('text')
                writer.writerow((username, source, comment))
    except Exception as e:
        print(e)

csvfile.close()
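Note that the `text` field returned by the API usually contains inline HTML (links, emoji image tags). If you want plain text in the CSV, you can strip the tags before writing each row. A minimal sketch using only the standard library; the `strip_html` helper and the sample string below are illustrative, not part of the original script:

```python
import re

def strip_html(text):
    """Remove HTML tags from a comment string, keeping only the visible text."""
    return re.sub(r'<[^>]+>', '', text)

# Illustrative comment fragment with an embedded link, as the API might return it
sample = '转发微博 <a href="https://m.weibo.cn">链接</a>'
print(strip_html(sample))  # 转发微博 链接
```

In the loop above you would then write `strip_html(comment)` instead of the raw `comment` value.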