I'm new to Scrapy, and I can already tell it's an impressive crawling framework!
In my project I sent over 90,000 requests, but some of them failed. I set the log level to INFO, so I can only see summary statistics, with no details about the failures:
2012-12-05 21:03:04+0800 [pd_spider] INFO: Dumping spider stats: {'downloader/exception_count': 1, 'downloader/exception_type_count/twisted.internet.error.ConnectionDone': 1, 'downloader/request_bytes': 46282582, 'downloader/request_count': 92383, 'downloader/request_method_count/GET': 92383, 'downloader/response_bytes': 123766459, 'downloader/response_count': 92382, 'downloader/response_status_count/200': 92382, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2012, 12, 5, 13, 3, 4, 836000), 'item_scraped_count': 46191, 'request_depth_max': 1, 'scheduler/memory_enqueued': 92383, 'start_time': datetime.datetime(2012, 12, 5, 12, 23, 25, 427000)}
Is there any way to get a more detailed report, for example one that shows the URLs that failed? Thanks!
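(A quick aside on verbosity: the summary above is what Scrapy's INFO level prints; the DEBUG level logs every crawled request and full tracebacks for errors. A minimal settings.py sketch, assuming a default project layout — `crawl.log` is an invented filename:)

```python
# settings.py -- hypothetical snippet: DEBUG logging prints each
# request/response as it is crawled and full tracebacks on failure.
LOG_LEVEL = 'DEBUG'

# Optionally write the verbose log to a file instead of stdout:
LOG_FILE = 'crawl.log'
```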
Yes, this is possible. The spider below collects 404 URLs in a `failed_urls` list and writes them into the crawl stats when the spider closes:
from scrapy import Spider, signals


class MySpider(Spider):
    name = "myspider"
    allowed_domains = ["example.com"]
    # Let 404 responses through to parse() instead of being filtered out.
    handle_httpstatus_list = [404]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html',
    ]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.failed_urls = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Run handle_spider_closed when the crawl finishes.
        crawler.signals.connect(spider.handle_spider_closed, signals.spider_closed)
        return spider

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, reason):
        # Record the collected 404 URLs in the final stats dump.
        self.crawler.stats.set_value('failed_urls', ', '.join(self.failed_urls))

    def process_exception(self, response, exception, spider):
        # Note: process_exception() is a downloader-middleware hook, so Scrapy
        # will not call it on a spider; move it into a downloader middleware
        # if you want to count download exceptions by type.
        ex_class = "%s.%s" % (exception.__class__.__module__,
                              exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value(
            'downloader/exception_type_count/%s' % ex_class, spider=spider)
Example output (note that the downloader/exception_count* stats only appear when exceptions are actually raised — I simulated them by running the spider after turning off my wireless adapter):
2012-12-10 11:15:26+0000 [myspider] INFO: Dumping Scrapy stats: {'downloader/exception_count': 15, 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 15, 'downloader/request_bytes': 717, 'downloader/request_count': 3, 'downloader/request_method_count/GET': 3, 'downloader/response_bytes': 15209, 'downloader/response_count': 3, 'downloader/response_status_count/200': 1, 'downloader/response_status_count/404': 2, 'failed_url_count': 2, 'failed_urls': 'http://www.example.com/thisurldoesnotexist.html, http://www.example.com/neitherdoesthisone.html', 'finish_reason': 'finished', 'finish_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 874000), 'log_count/DEBUG': 9, 'log_count/ERROR': 2, 'log_count/INFO': 4, 'response_received_count': 3, 'scheduler/dequeued': 3, 'scheduler/dequeued/memory': 3, 'scheduler/enqueued': 3, 'scheduler/enqueued/memory': 3, 'spider_exceptions/NameError': 2, 'start_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 560000)}