Learning Web Scraping with Scrapy (6): 1. Using Scrapy directly; using a Scrapy pipeline; using Scrapy's media pipeline class to store cat images. A study of the media pipeline classes. Building a custom media pipeline to store images


1. Introduction:

Let's start with a small example: using Scrapy to crawl Baidu Images. (
Target Baidu Images URL:
https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA)

(1) Storing locally without a pipeline:

1. Create the Scrapy project and spider file
'''
Enter these in the terminal, in order (the spider name and domain match the spider file below):
1. scrapy startproject baiduimgs
2. cd baiduimgs
3. scrapy genspider bdimgs image.baidu.com
'''
2. Write the spider file:
# -*- coding: utf-8 -*-
import scrapy

import re
import os


class BdimgSpider(scrapy.Spider):
    name = 'bdimgs'
    allowed_domains = ['image.baidu.com']
    start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']
    num = 0  # counter used to name the saved files

    def parse(self, response):
        # the thumbnail URLs sit in the page source as "thumbURL":"..."
        text = response.text
        img_urls = re.findall('"thumbURL":"(.*?)"', text)
        for img_url in img_urls:
            # request every image; dont_filter=True keeps similar URLs from being deduplicated
            yield scrapy.Request(img_url, dont_filter=True, callback=self.get_img)

    def get_img(self, response):
        # response.body holds the raw image bytes
        img_data = response.body
        if not os.path.exists("dir"):
            os.mkdir("dir")
        filename = "dir/%s.jpg" % self.num
        self.num += 1
        with open(filename, "wb") as f:
            f.write(img_data)
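
Run the spider from the project root with the usual command:

scrapy crawl bdimgs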


Note:
turn off the robots protocol in settings.py;
and add a UA (User-Agent)!! For example:
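
A minimal sketch of the two settings.py entries (the UA string here is just an example; any real browser UA works):

ROBOTSTXT_OBEY = False   # stop obeying robots.txt
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'   # example UA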

3. Result:

(screenshot of the downloaded images omitted)

(2) Storing locally with a pipeline:

1. Write the spider file:
# -*- coding: utf-8 -*-
import scrapy

import re
from ..items import BaiduimgsItem	# import the item class that defines the fields


class BdimgSpider(scrapy.Spider):
    name = 'bdimgs'
    allowed_domains = ['image.baidu.com']
    start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']

    def parse(self, response):
        text = response.text
        img_urls = re.findall('"thumbURL":"(.*?)"', text)
        for img_url in img_urls:
            yield scrapy.Request(img_url, dont_filter=True, callback=self.get_img)

    def get_img(self, response):
        # no file I/O here any more: wrap the raw bytes in an item and let the pipeline store them
        img_data = response.body

        item = BaiduimgsItem()
        item["img_data"] = img_data
        yield item

2. Create the corresponding field in items.py:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class BaiduimgsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    img_data=scrapy.Field()

3. Write the pipeline file pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import os


class BaiduimgsPipeline(object):
    num = 0  # counter used to name the saved files

    def process_item(self, item, spider):
        if not os.path.exists("dir_pipe"):
            os.mkdir("dir_pipe")
        filename = "dir_pipe/%s.jpg" % self.num
        self.num += 1
        img_data = item["img_data"]
        with open(filename, "wb") as f:
            f.write(img_data)
        return item  # return the item so any later pipelines still receive it


Note: the pipeline must be enabled in settings.py!! For example:
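
A minimal sketch of the settings.py entry (300 is the pipeline's priority; lower values run earlier):

ITEM_PIPELINES = {
   'baiduimgs.pipelines.BaiduimgsPipeline': 300,
}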

4. Result:

(screenshot of the downloaded images omitted)

(3) But take note of the spider files we wrote for the two storage approaches above:

Both contain a get_img() callback. From earlier study we know a callback is required, but looking closely at these two spider files, the callback contributes little here: our target is the image data itself, and no further extraction is needed, so the callback is clearly redundant. Is there a way to simplify it?

This is where the media pipeline classes come in. Usage is as follows:

1. Change the spider file to:
# -*- coding: utf-8 -*-
import scrapy

import re
import os
from ..items import BaiduimgsPipeItem
class BdimgSpider(scrapy.Spider):
    name = 'bdimgs'
    allowed_domains = ['image.baidu.com']
    start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']

    def parse(self, response):
        text=response.text
        image_urls=re.findall('"thumbURL":"(.*?)"',text)
        # note: the value assigned to the field here is the list of image URLs!!!
        item=BaiduimgsPipeItem()
        item["image_urls"]=image_urls
        yield item
2. Write items.py: (note: when using the media pipeline class, this field name must be image_urls, because that is the default field name in the source code!!!)
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class BaiduimgsPipeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls=scrapy.Field()
3. Note: with this approach there is nothing left to write in pipelines.py; just make the following two changes in settings.py. (Key point: on the surface no pipeline is used, since our pipelines.py does nothing, but because we used the special field name, the media pipeline class is quietly doing the work behind the scenes!!!)
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # 'baiduimgs.pipelines.BaiduimgsPipeline': 300,
   'scrapy.pipelines.images.ImagesPipeline': 300,          # note: this pipeline must be enabled!
}
# note: the storage path for the media pipeline must be specified!
IMAGES_STORE = r'C:/my/pycharm_work/爬虫/baiduimgs/baiduimgs/dir0'
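
Note: Scrapy's ImagesPipeline relies on the Pillow library for the image handling described below, so install it first:

pip install Pillow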
4. Result:

(screenshot of the downloaded images omitted)

2. The media pipeline classes:

(From the source-code docstring:
Abstract pipeline that implement the image thumbnail generation logic

In short: a set of methods dedicated to downloading and generating media files/images.)

The media pipelines all implement the following features:
1. Avoid re-downloading recently downloaded media
2. Specify the storage location (a filesystem directory, an Amazon S3 bucket, or a Google Cloud Storage bucket)

3. The images pipeline has some extra image-processing features, all driven by settings (see the sketch after this list):
(1) Convert all downloaded images to a common format (JPG) and mode (RGB)
(2) Generate thumbnails
(3) Check the image width/height and filter out images below a minimum size
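
A minimal sketch of the settings.py entries that drive these features (the sizes and day count are example values):

IMAGES_THUMBS = {'small': (50, 50), 'big': (270, 270)}   # thumbnail names and sizes to generate
IMAGES_MIN_WIDTH = 110    # drop images narrower than 110 px
IMAGES_MIN_HEIGHT = 110   # drop images shorter than 110 px
IMAGES_EXPIRES = 90       # days before a previously downloaded image is fetched again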

(1) A quick study of the source code:

(view the source by following the import: from scrapy.pipelines.images import ImagesPipeline!!!)

"""
Images Pipeline

See documentation in topics/media-pipeline.rst
"""
import functools
import hashlib
from io import BytesIO

from PIL import Image

from scrapy.utils.misc import md5sum
from scrapy.utils.python import to_bytes
from scrapy.http import Request
from scrapy.settings import Settings
from scrapy.exceptions import DropItem
#TODO: from scrapy.pipelines.media import MediaPipeline
from scrapy.pipelines.files import FileException, FilesPipeline


class NoimagesDrop(DropItem):
    """Product with no images exception"""


class ImageException(FileException):
    """General image error exception"""


class ImagesPipeline(FilesPipeline):
    """Abstract pipeline that implement the image thumbnail generation logic
    """

    MEDIA_NAME = 'image'

    # Uppercase attributes kept for backward compatibility with code that subclasses
    # ImagesPipeline. They may be overridden by settings.
    MIN_WIDTH = 0
    MIN_HEIGHT = 0
    EXPIRES = 90
    THUMBS = {}
    DEFAULT_IMAGES_URLS_FIELD = 'image_urls'
    DEFAULT_IMAGES_RESULT_FIELD = 'images'

    def __init__(self, store_uri, download_func=None, settings=None):
        super(ImagesPipeline, self).__init__(store_uri, settings=settings,
                                             download_func=download_func)

        # parse the configuration fields in settings.py
        if isinstance(settings, dict) or settings is None:
            settings = Settings(settings)

        resolve = functools.partial(self._key_for_pipe,
                                    base_class_name="ImagesPipeline",
                                    settings=settings)
        self.expires = settings.getint(
            resolve("IMAGES_EXPIRES"), self.EXPIRES
        )

        if not hasattr(self, "IMAGES_RESULT_FIELD"):
            self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD
        if not hasattr(self, "IMAGES_URLS_FIELD"):
            self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD

        self.images_urls_field = settings.get(
            resolve('IMAGES_URLS_FIELD'),
            self.IMAGES_URLS_FIELD
        )
        self.images_result_field = settings.get(
            resolve('IMAGES_RESULT_FIELD'),
            self.IMAGES_RESULT_FIELD
        )
        self.min_width = settings.getint(
            resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH
        )
        self.min_height = settings.getint(
            resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT
        )
        self.thumbs = settings.get(
            resolve('IMAGES_THUMBS'), self.THUMBS
        )

    @classmethod
    def from_settings(cls, settings):
        s3store = cls.STORE_SCHEMES['s3']
        s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
        s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
        s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
        s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
        s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
        s3store.AWS_VERIFY = settings['AWS_VERIFY']
        s3store.POLICY = settings['IMAGES_STORE_S3_ACL']

        gcs_store = cls.STORE_SCHEMES['gs']
        gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
        gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None

        ftp_store = cls.STORE_SCHEMES['ftp']
        ftp_store.FTP_USERNAME = settings['FTP_USER']
        ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']
        ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')

        store_uri = settings['IMAGES_STORE']
        return cls(store_uri, settings=settings)

    # image downloading
    def file_downloaded(self, response, request, info):
        return self.image_downloaded(response, request, info)

    def image_downloaded(self, response, request, info):
        checksum = None
        for path, image, buf in self.get_images(response, request, info):
            if checksum is None:
                buf.seek(0)
                checksum = md5sum(buf)
            width, height = image.size
            self.store.persist_file(
                path, buf, info,
                meta={'width': width, 'height': height},
                headers={'Content-Type': 'image/jpeg'})
        return checksum

    # image retrieval: includes size filtering and thumbnail generation!
    def get_images(self, response, request, info):
        path = self.file_path(request, response=response, info=info)
        orig_image = Image.open(BytesIO(response.body))

        width, height = orig_image.size
        if width < self.min_width or height < self.min_height:     # filter by image size
            raise ImageException("Image too small (%dx%d < %dx%d)" %
                                 (width, height, self.min_width, self.min_height))

        image, buf = self.convert_image(orig_image)
        yield path, image, buf

        for thumb_id, size in self.thumbs.items():  # thumbnails
            thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)
            thumb_image, thumb_buf = self.convert_image(image, size)
            yield thumb_path, thumb_image, thumb_buf

    def convert_image(self, image, size=None):   # convert to a common format
        if image.format == 'PNG' and image.mode == 'RGBA':
            background = Image.new('RGBA', image.size, (255, 255, 255))
            background.paste(image, image)
            image = background.convert('RGB')
        elif image.mode == 'P':
            image = image.convert("RGBA")
            background = Image.new('RGBA', image.size, (255, 255, 255))
            background.paste(image, image)
            image = background.convert('RGB')
        elif image.mode != 'RGB':
            image = image.convert('RGB')

        if size:
            image = image.copy()
            image.thumbnail(size, Image.ANTIALIAS)

        buf = BytesIO()
        image.save(buf, 'JPEG')
        return image, buf

    def get_media_requests(self, item, info):       # can be overridden
        # turn the image URLs into Requests and send them to the engine
        '''
        req_list=[]
        for x in item.get(self.images_urls_field, []):      # equivalent to item["image_urls"]: gets the list of image URLs
            req_list.append(Request(x))
        return req_list
        '''
        return [Request(x) for x in item.get(self.images_urls_field, [])]

    def item_completed(self, results, item, info):      # receives the results of the requests returned by get_media_requests() above, including the file names; can also be overridden
        if isinstance(item, dict) or self.images_result_field in item.fields:
            item[self.images_result_field] = [x for ok, x in results if ok]
        return item

    def file_path(self, request, response=None, info=None):
        image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        # hash the URL; the URL is unique, so the image name is unique
        return 'full/%s.jpg' % (image_guid)

    def thumb_path(self, request, thumb_id, response=None, info=None):
        # storage path for thumbnails
        thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
        return 'thumbs/%s/%s.jpg' % (thumb_id, thumb_guid)



(2) Now let's build our own media pipeline for image storage (override part of the framework's media pipeline class to customize the names the images are saved under!):

1. In the spider file, collect the list of image URLs and yield the item
2. In items.py, define the special field name: image_urls=scrapy.Field()
3. In settings.py, set the IMAGES_STORE storage path; if the path does not exist, the framework creates it for us
4. To use the default pipeline, enable 'scrapy.pipelines.images.ImagesPipeline': 60 in settings.py;
a custom pipeline must inherit from ImagesPipeline and the corresponding pipeline must be enabled in settings.py
5. The following can be overridden, per the official documentation:
get_media_requests
item_completed

1. The spider file:
# -*- coding: utf-8 -*-
import scrapy

import re
from ..items import BaiduimgPipeItem
import os
class BdimgSpider(scrapy.Spider):
    name = 'bdimgpipe'
    allowed_domains = ['image.baidu.com']
    start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']

    def parse(self, response):
        text=response.text
        image_urls=re.findall('"thumbURL":"(.*?)"',text)
        item=BaiduimgPipeItem()
        item["image_urls"]=image_urls
        yield item

2. Set the special field name in items.py:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class BaiduimgPipeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls=scrapy.Field()

3. Enable the custom pipeline and set the file storage path in settings.py:
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # 'baiduimg.pipelines.BaiduimgPipeline': 300,
   'baiduimg.pipelines.BdImagePipeline': 40,
   # 'scrapy.pipelines.images.ImagesPipeline': 60,
}

# IMAGES_STORE =r'C:\my\pycharm_work\爬虫\eight_class\baiduimg\baiduimg\dir0'
IMAGES_STORE ='C:/my/pycharm_work/爬虫/eight_class_ImagesPipeline/baiduimg/baiduimg/dir3'
4. Write pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.http import Request
import os

from scrapy.pipelines.images import ImagesPipeline        # import the media pipeline class we build on
from .settings import IMAGES_STORE


class BdImagePipeline(ImagesPipeline):
    image_num = 0
    print("the spider's media pipeline class")

    def get_media_requests(self, item, info):  # can be overridden
        # turn the image URLs into Requests and send them to the engine
        '''
        req_list=[]
        for x in item.get(self.images_urls_field, []):      # equivalent to item["image_urls"]: gets the list of image URLs
            req_list.append(Request(x))
        return req_list
        '''
        return [Request(x) for x in item.get(self.images_urls_field, [])]

    def item_completed(self, results, item, info):
        images_path = [x["path"] for ok, x in results if ok]

        for image_path in images_path:
            # customize the saved file name via os.rename: the first argument, IMAGES_STORE+"/"+image_path,
            # is the original absolute path of the stored image; the second argument is the new custom
            # absolute path (here also placed under IMAGES_STORE)
            os.rename(IMAGES_STORE + "/" + image_path, IMAGES_STORE + "/" + str(self.image_num) + ".jpg")
            self.image_num += 1

        return item  # hand the item back so any later pipelines still receive it


'''
The overridable method from the source:
    def item_completed(self, results, item, info):      # this method can also be overridden
        if isinstance(item, dict) or self.images_result_field in item.fields:
            item[self.images_result_field] = [x for ok, x in results if ok]
        return item

The results parameter explained:
url - the URL the file was downloaded from; this is the URL of the request returned by get_media_requests()

path - the path the file is stored under (relative to FILES_STORE)

checksum - an MD5 hash of the image content

A typical value of the results parameter:
[(True,
  {'checksum': '2b00042f7481c7b056c4b410d28f33cf',
   'path': 'full/0a79c461a4062ac383dc4fade7bc09f1384a3910.jpg',
   'url': 'http://www.example.com/files/product1.pdf'}),
]

The item_completed() above is what processes this results value, so reading the source:
[x for ok, x in results if ok]
this list comprehension collects the whole dict of every successful result, assigns it to the item, and returns it.

Following the same idea, the storage paths inside results can be obtained with this list comprehension:
images_path = [x["path"] for ok,x in results if ok]
'''
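
By the way, renaming after the download works, but an alternative sketch (the class name BdImageIndexPipeline and the img_idx meta key are hypothetical names of mine, not from the framework) is to override file_path() so the pipeline writes each image under the custom name in the first place:

from scrapy.http import Request
from scrapy.pipelines.images import ImagesPipeline

class BdImageIndexPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # attach each URL's position in the item so file_path() can build a stable name
        return [Request(url, meta={'img_idx': i})
                for i, url in enumerate(item.get(self.images_urls_field, []))]

    def file_path(self, request, response=None, info=None):
        # deterministic per request, so repeated calls always agree on the path
        return 'full/%d.jpg' % request.meta['img_idx']

Note the index restarts at 0 for every item, so this sketch only suits spiders that yield a single image-URL item, like the ones above.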
5. And we can see it works perfectly:

(screenshot of the renamed images omitted)

3. If you feel these thirty cute cat pictures are not enough, just change the spider file and grab as many as you like!!! Note: set a download delay in settings.py (a sketch follows, before the modified spider), or you risk being banned.
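
A minimal sketch of the delay setting in settings.py (the value is an example; larger is politer):

DOWNLOAD_DELAY = 1    # wait 1 second between consecutive requests

The modified spider file: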

# -*- coding: utf-8 -*-
import scrapy

import re
from ..items import BaiduimgPipeItem
import os
class BdimgSpider(scrapy.Spider):
    name = 'bdimgpipe'
    allowed_domains = ['image.baidu.com']
    start_urls = ['https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=-1&st=-1&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&fm=index&pos=history&word=%E7%8C%AB%E5%92%AA']
    page_url="https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E7%8C%AB%E5%92%AA&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E7%8C%AB%E5%92%AA&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn={}&rn=30&gsm=1e&1588088573059="
    num = 0
    pagenum=1

    def parse(self, response):
        text=response.text
        image_urls=re.findall('"thumbURL":"(.*?)"',text)
        item=BaiduimgPipeItem()
        item["image_urls"]=image_urls
        yield item

        url=self.page_url.format(self.pagenum*30)
        self.pagenum+=1
        if self.pagenum == 3:			# set the cutoff to however many pages you want!!!
            return
        yield scrapy.Request(url, callback=self.parse)
'''
Open the page with F12 (DevTools): each time more images load, find the corresponding request URL. Observing the pattern, the pn parameter increases by 30 with every page that is loaded!!!
https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E7%8C%AB%E5%92%AA&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E7%8C%AB%E5%92%AA&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=30&rn=30&gsm=1e&1588088573059=
https://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E7%8C%AB%E5%92%AA&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=-1&z=&ic=0&hd=&latest=&copyright=&word=%E7%8C%AB%E5%92%AA&s=&se=&tab=&width=&height=&face=0&istype=2&qc=&nc=1&fr=&expermode=&force=&pn=60&rn=30&gsm=3c&1588088573138=
'''
The result is really nice:

(screenshot of the downloaded images omitted)

Extension: some media pipeline settings:

ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}  # enable the pipeline
FILES_STORE = '/path/to/valid/dir'  # files pipeline storage location
IMAGES_STORE = '/path/to/valid/dir'  # images pipeline storage location
FILES_URLS_FIELD = 'field_name_for_your_files_urls'  # custom file-URL field
FILES_RESULT_FIELD = 'field_name_for_your_processed_files'  # custom result field
IMAGES_URLS_FIELD = 'field_name_for_your_images_urls'  # custom image-URL field
IMAGES_RESULT_FIELD = 'field_name_for_your_processed_images'  # custom result field
FILES_EXPIRES = 90  # file expiration time, default 90 days
IMAGES_EXPIRES = 90  # image expiration time, default 90 days
IMAGES_THUMBS = {'small': (50, 50), 'big': (270, 270)}  # thumbnail sizes (just add this to settings.py and rerun the framework to generate thumbnails; very convenient and commonly used!!!)
IMAGES_MIN_HEIGHT = 110  # filter out images below this height
IMAGES_MIN_WIDTH = 110  # filter out images below this width
MEDIA_ALLOW_REDIRECTS = True  # whether to allow redirects
