
Python Scrapy for Beginners: Spider Source Code Analysis



1. Preface

In Scrapy, the Spider defines how to crawl (request & parse) and what operations to perform when scraping a particular page or set of pages (URLs).

To generate a spider, run the command: scrapy genspider lagou www.lagou.com (format: scrapy genspider <spider-name> <domain>).

Tip: when generating a spider there are actually four templates. If none is specified, the basic template is used by default, as in the genspider command above. The other three templates are crawl (CrawlSpider), csvfeed (CSVFeedSpider) and xmlfeed (XMLFeedSpider).

Spider is the most basic class; every spider template inherits from it (the basic template inherits from the Spider class directly).


2. Analysis

The main methods a Spider class uses, and their call order, are:

1) __init__(): initializes the spider name and the start_urls list.

```python
def __init__(self, name=None, **kwargs):
    if name is not None:
        self.name = name
    elif not getattr(self, 'name', None):
        raise ValueError("%s must have a name" % type(self).__name__)
    self.__dict__.update(kwargs)
    if not hasattr(self, 'start_urls'):
        self.start_urls = []
```
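The `ValueError("... must have a name")` raised here is exactly the error many beginners hit. The name check can be reproduced standalone (a minimal stdlib-only sketch of the same logic, not Scrapy itself; the class name `DemoSpider` is illustrative):

```python
# Standalone re-implementation of the name check in Spider.__init__ above;
# no Scrapy import is needed to illustrate the behavior.
class DemoSpider:
    name = None  # subclasses are expected to set this class attribute

    def __init__(self, name=None, **kwargs):
        if name is not None:
            self.name = name
        elif not getattr(self, 'name', None):
            # This is the "spider must have a name" error.
            raise ValueError("%s must have a name" % type(self).__name__)
        self.__dict__.update(kwargs)
        if not hasattr(self, 'start_urls'):
            self.start_urls = []

try:
    DemoSpider()              # no name anywhere -> ValueError
except ValueError as e:
    print(e)                  # DemoSpider must have a name

spider = DemoSpider(name="lagou")  # passing a name (or setting the
print(spider.name)                 # class attribute) avoids the error
```

In real Scrapy code, either set `name = "..."` as a class attribute on your spider subclass or pass `name=` when instantiating; otherwise this check fires.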

2) start_requests(): the entry point for the spider's requests. When executed it calls make_requests_from_url() to build Request objects, hands them to Scrapy to download, and the resulting responses are passed back to the designated handler for processing.

```python
def start_requests(self):
    cls = self.__class__
    if method_is_overridden(cls, Spider, 'make_requests_from_url'):
        warnings.warn(
            "Spider.make_requests_from_url method is deprecated; it "
            "won't be called in future Scrapy releases. Please "
            "override Spider.start_requests method instead (see %s.%s)." % (
                cls.__module__, cls.__name__
            ),
        )
        for url in self.start_urls:
            yield self.make_requests_from_url(url)
    else:
        for url in self.start_urls:
            yield Request(url, dont_filter=True)

def make_requests_from_url(self, url):
    """ This method is deprecated. """
    return Request(url, dont_filter=True)
```
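The non-deprecated branch above is just a generator that yields one Request per start URL. That loop can be illustrated with a standalone sketch (the stub `Request` class below stands in for `scrapy.http.Request`; the class names and URLs are illustrative assumptions, not real Scrapy code):

```python
# Minimal stand-in for scrapy.http.Request, just enough to show the loop.
class Request:
    def __init__(self, url, dont_filter=False):
        self.url = url
        self.dont_filter = dont_filter

class DemoSpider:
    start_urls = ["https://www.lagou.com/page1",
                  "https://www.lagou.com/page2"]

    def start_requests(self):
        # Same shape as Spider.start_requests above: one Request per URL,
        # with dont_filter=True so the start URLs are never de-duplicated.
        for url in self.start_urls:
            yield Request(url, dont_filter=True)

requests = list(DemoSpider().start_requests())
print([r.url for r in requests])
# ['https://www.lagou.com/page1', 'https://www.lagou.com/page2']
```

Because it is a generator, nothing is built until the engine iterates over it, which is also why overriding start_requests() is the recommended place to customize the initial requests (e.g. to POST a login form first).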

3) parse(): the handler that actually processes the response (i.e. parses the page). In start_requests(), if no callback is specified, parse() is called by default. You can also define your own handler and pass it via the callback argument. Remember that callback takes a reference to the function, not a call: pass the method name only, without parentheses.

Tip: you design this method around your own scraping needs. After parsing the response it returns Items or Requests (the latter with a callback specified). Items are handed to the Item Pipeline for processing; Requests are handed to Scrapy to download and are then processed by the specified callback, looping until all the data has been handled.
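The point about passing the function itself (no parentheses) can be shown with stdlib-only stubs (the `Request`/`Response` classes and URLs below are illustrative stand-ins, not Scrapy's real classes):

```python
class Request:
    """Stub standing in for scrapy.http.Request; it stores the callback."""
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

class Response:
    """Stub response carrying just a URL."""
    def __init__(self, url):
        self.url = url

class DemoSpider:
    def parse(self, response):
        # Default callback: yield a follow-up Request with a custom handler.
        # Note callback=self.parse_detail (a reference),
        # NOT callback=self.parse_detail() (which would call it immediately).
        yield Request("https://www.lagou.com/jobs/1.html",
                      callback=self.parse_detail)

    def parse_detail(self, response):
        # A user-defined handler, invoked later through the stored reference.
        return {"url": response.url}

spider = DemoSpider()
req = next(spider.parse(Response("https://www.lagou.com/")))
item = req.callback(Response(req.url))  # the engine would make this call
print(item)  # {'url': 'https://www.lagou.com/jobs/1.html'}
```

The engine stores the callable on the Request and invokes it only after the download finishes, which is why a function reference, not a call result, must be passed.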

Spider source code:

  1. """
  2. Base class for Scrapy spiders
  3. See documentation in docs/topics/spiders.rst
  4. """
  5. import logging
  6. import warnings
  7. from scrapy import signals
  8. from scrapy.http import Request
  9. from scrapy.utils.trackref import object_ref
  10. from scrapy.utils.url import url_is_from_spider
  11. from scrapy.utils.deprecate import create_deprecated_class
  12. from scrapy.exceptions import ScrapyDeprecationWarning
  13. from scrapy.utils.deprecate import method_is_overridden
  14. #所有爬虫的基类,用户定义的爬虫必须从这个类继承
  15. class Spider(object_ref):
  16. """Base class for scrapy spiders. All spiders must inherit from this
  17. class.
  18. """
  19. #1、定义spider名字的字符串。spider的名字定义了Scrapy如何定位(并初始化)spider,所以其必须是唯一的。
  20. #2、name是spider最重要的属性,而且是必须的。一般做法是以该网站的域名来命名spider。例如我们在爬取豆瓣读书爬虫时使用‘name = "douban_book_spider"’
  21. name = None
  22. custom_settings = None
  23. #初始化爬虫名字和start_urls列表。上面已经提到。
  24. def __init__(self, name=None, **kwargs):
  25. #初始化爬虫名字
  26. if name is not None:
  27. self.name = name
  28. elif not getattr(self, 'name', None):
  29. raise ValueError("%s must have a name" % type(self).__name__)
  30. self.__dict__.update(kwargs)
  31. #初始化start_urls列表,当没有指定的URL时,spider将从该列表中开始进行爬取。 因此,第一个被获取到的页面的URL将是该列表之一,后续的URL将会从获取到的数据中提取。
  32. if not hasattr(self, 'start_urls'):
  33. self.start_urls = []
  34. @property
  35. def logger(self):
  36. logger = logging.getLogger(self.name)
  37. return logging.LoggerAdapter(logger, {'spider': self})
  38. def log(self, message, level=logging.DEBUG, **kw):
  39. """Log the given message at the given log level
  40. This helper wraps a log call to the logger within the spider, but you
  41. can use it directly (e.g. Spider.logger.info('msg')) or use any other
  42. Python logger too.
  43. """
  44. self.logger.log(level, message, **kw)
  45. @classmethod
  46. def from_crawler(cls, crawler, *args, **kwargs):
  47. spider = cls(*args, **kwargs)
  48. spider._set_crawler(crawler)
  49. return spider
  50. def set_crawler(self, crawler):
  51. warnings.warn("set_crawler is deprecated, instantiate and bound the "
  52. "spider to this crawler with from_crawler method "
  53. "instead.",
  54. category=ScrapyDeprecationWarning, stacklevel=2)
  55. assert not hasattr(self, 'crawler'), "Spider already bounded to a " \
  56. "crawler"
  57. self._set_crawler(crawler)
  58. def _set_crawler(self, crawler):
  59. self.crawler = crawler
  60. self.settings = crawler.settings
  61. crawler.signals.connect(self.close, signals.spider_closed)
  62. #该方法将读取start_urls列表内的地址,为每一个地址生成一个Request对象,并返回这些对象的迭代器。
  63. #注意:该方法只会调用一次。
  64. def start_requests(self):
  65. cls = self.__class__
  66. if method_is_overridden(cls, Spider, 'make_requests_from_url'):
  67. warnings.warn(
  68. "Spider.make_requests_from_url method is deprecated; it "
  69. "won't be called in future Scrapy releases. Please "
  70. "override Spider.start_requests method instead (see %s.%s)." % (
  71. cls.__module__, cls.__name__
  72. ),
  73. )
  74. for url in self.start_urls:
  75. yield self.make_requests_from_url(url)
  76. else:
  77. for url in self.start_urls:
  78. yield Request(url, dont_filter=True)
  79. #1、start_requests()中调用,实际生成Request的函数。
  80. #2、Request对象默认的回调函数为parse(),提交的方式为get。
  81. def make_requests_from_url(self, url):
  82. """ This method is deprecated. """
  83. return Request(url, dont_filter=True)
  84. #默认的Request对象回调函数,处理返回的response。
  85. #生成Item或者Request对象。这个类需要我们自己去实现。
  86. def parse(self, response):
  87. raise NotImplementedError
  88. @classmethod
  89. def update_settings(cls, settings):
  90. settings.setdict(cls.custom_settings or {}, priority='spider')
  91. @classmethod
  92. def handles_request(cls, request):
  93. return url_is_from_spider(request.url, cls)
  94. @staticmethod
  95. def close(spider, reason):
  96. closed = getattr(spider, 'closed', None)
  97. if callable(closed):
  98. return closed(reason)
  99. def __str__(self):
  100. return "<%s %r at 0x%0x>" % (type(self).__name__, self.name, id(self))
  101. __repr__ = __str__
  102. BaseSpider = create_deprecated_class('BaseSpider', Spider)
  103. class ObsoleteClass(object):
  104. def __init__(self, message):
  105. self.message = message
  106. def __getattr__(self, name):
  107. raise AttributeError(self.message)
  108. spiders = ObsoleteClass(
  109. '"from scrapy.spider import spiders" no longer works - use '
  110. '"from scrapy.spiderloader import SpiderLoader" and instantiate '
  111. 'it with your project settings"'
  112. )
  113. # Top-level imports
  114. from scrapy.spiders.crawl import CrawlSpider, Rule
  115. from scrapy.spiders.feed import XMLFeedSpider, CSVFeedSpider
  116. from scrapy.spiders.sitemap import SitemapSpider
Author: 小怪聊职场
Link: https://www.jianshu.com/p/d492adf17312
Source: Jianshu (简书); copyright remains with the author. For any form of reposting, please contact the author for authorization and credit the source.

