
Scraping the first page (90 listings) of Shanghai python jobs on Zhilian Zhaopin with the Scrapy framework

Goal: scrape the first page of full-time positions matching the keyword "python" from the recruitment site.

1. Create the project:

  In CMD:

      scrapy startproject zhilianJob

  Then cd zhilianJob and generate the spider file job.py: scrapy genspider job xxx.com. The resulting layout is sketched below.
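
  After these two commands, the project should look roughly like the standard Scrapy layout (middlewares.py omitted here since this tutorial does not touch it):

      zhilianJob/
      ├── scrapy.cfg
      └── zhilianJob/
          ├── __init__.py
          ├── items.py
          ├── pipelines.py
          ├── settings.py
          └── spiders/
              ├── __init__.py
              └── job.py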

2. In settings.py:

      USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
      ROBOTSTXT_OBEY = False
      ITEM_PIPELINES = {
          'zhilianJob.pipelines.ZhilianjobPipeline': 300,
      }

 

3. In the spider file job.py:

  # -*- coding: utf-8 -*-
  import scrapy
  import json
  from zhilianJob.items import ZhilianjobItem

  class JobSpider(scrapy.Spider):
      name = 'job'
      # allowed_domains = ['www.sou.zhaopin.com']
      # start_urls can be shortened to: https://fe-api.zhaopin.com/c/i/sou?pageSize=90&cityId=538&kw=python&kt=3
      start_urls = [
          'https://fe-api.zhaopin.com/c/i/sou?pageSize=90&cityId=538&salary=0,0&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=python&kt=3&=0&_v=0.02964699&x-zp-page-request-id=3e524df5d2b541dcb5ddb82028a5c1b6-1565749700925-710042&x-zp-client-id=2724abb6-fb33-43a0-af2e-f177d8a3e169']

      def parse(self, response):
          # the endpoint returns JSON, so parse the body instead of using selectors
          data = json.loads(response.text)
          jobs = data['data']['results']
          try:
              for j in jobs:
                  item = ZhilianjobItem()
                  item['job_name'] = j['jobName']
                  item['job_firm'] = j['company']['name']
                  item['job_firmPeople'] = j['company']['size']['name']
                  item['job_salary'] = j['salary']
                  item['job_type'] = j['jobType']['items'][0]['name']
                  item['job_yaoqiu'] = j['eduLevel']['name'] + ',' + j['workingExp']['name']
                  item['job_welfare'] = ','.join(j['welfare'])
                  yield item
          except Exception as e:
              print(e)
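
  For reference, parse() above assumes the API response is JSON shaped roughly like this. The key names are taken from the code; the values are purely illustrative, not real data:

      {
          "data": {
              "results": [
                  {
                      "jobName": "Python开发工程师",
                      "company": {"name": "Example Co.", "size": {"name": "100-499人"}},
                      "salary": "10K-15K",
                      "jobType": {"items": [{"name": "软件/互联网开发"}]},
                      "eduLevel": {"name": "本科"},
                      "workingExp": {"name": "1-3年"},
                      "welfare": ["五险一金", "年终奖"]
                  }
              ]
          }
      }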

 

4. In items.py:

  # -*- coding: utf-8 -*-
  # Define here the models for your scraped items
  #
  # See documentation in:
  # https://doc.scrapy.org/en/latest/topics/items.html
  import scrapy

  class ZhilianjobItem(scrapy.Item):
      # define the fields for your item here:
      job_name = scrapy.Field()        # job title
      job_firm = scrapy.Field()        # company name
      job_firmPeople = scrapy.Field()  # company size
      job_type = scrapy.Field()        # job category
      job_salary = scrapy.Field()      # salary
      job_yaoqiu = scrapy.Field()      # requirements (education + experience)
      job_welfare = scrapy.Field()     # benefits

5. Create the database table; its columns just need to correspond to the fields in items.py. A minimal schema sketch follows.
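
  The sketch below matches the column order implied by the insert statement in the pipeline (a leading auto-increment id corresponds to the NULL in the VALUES list). Column types and lengths are assumptions; adjust them as needed:

      CREATE TABLE sh_python (
          id INT PRIMARY KEY AUTO_INCREMENT,
          job_name VARCHAR(100),
          job_firm VARCHAR(100),
          job_firmPeople VARCHAR(50),
          job_salary VARCHAR(50),
          job_type VARCHAR(100),
          job_yaoqiu VARCHAR(100),
          job_welfare VARCHAR(255)
      ) DEFAULT CHARSET = utf8mb4;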

6. The pipeline file pipelines.py:

  import pymysql

  class ZhilianjobPipeline(object):
      conn = None
      mycursor = None

      def open_spider(self, spider):
          self.conn = pymysql.connect(host='172.16.25.37', port=3306, user='root', password='root', db='scrapy')
          # get a cursor
          self.mycursor = self.conn.cursor()
          print('Clearing old data...')
          # only the first page is wanted, so every crawl should be fresh:
          # empty the table before inserting the new rows
          sql1 = "truncate table sh_python"
          self.mycursor.execute(sql1)
          print('Old data cleared. Shanghai--python--page 1 (90)... starting download...')

      def process_item(self, item, spider):
          job_name = item['job_name']
          job_firm = item['job_firm']
          job_firmPeople = item['job_firmPeople']
          job_salary = item['job_salary']
          job_type = item['job_type']
          job_yaoqiu = item['job_yaoqiu']
          job_welfare = item['job_welfare']
          try:
              # parameterized query, so pymysql escapes any quotes in the values
              sql2 = "insert into sh_python values (NULL, %s, %s, %s, %s, %s, %s, %s)"
              # execute the sql
              self.mycursor.execute(sql2, (job_name, job_firm, job_firmPeople, job_salary, job_type, job_yaoqiu, job_welfare))
              # commit
              self.conn.commit()
          except Exception as e:
              print(e)
              self.conn.rollback()
          return item

      def close_spider(self, spider):
          self.mycursor.close()
          self.conn.close()
          print('Shanghai--python--page 1 (90)... download complete...')
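
  With the table created and the pipeline enabled, run the spider from the project root:

      scrapy crawl job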

 

7. Check that the database now contains the data.

Success.

 

Reposted from: https://www.cnblogs.com/wshr210/p/11351842.html
