
How to get all plain text from a website using Scrapy?

scrapy

I would like to get all the text visible on a website after the HTML is rendered. I'm working in Python with the Scrapy framework. With xpath('//body//text()') I'm able to get it, but that comes with the HTML tags, and I only want the text. Is there any solution for this?


1 Answer


The easiest option would be to extract //body//text() and join everything found:

''.join(sel.xpath("//body//text()").extract()).strip()

Here sel is a Selector instance.
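
For completeness, here is a minimal sketch (with a hypothetical spider name and start URL) of how the same idea looks inside a spider with the current Scrapy API, where the response object exposes .xpath() directly, so no separate Selector is needed:

import scrapy


class BodyTextSpider(scrapy.Spider):
    name = "body_text"                      # hypothetical spider name
    start_urls = ["https://example.com"]    # hypothetical start URL

    def parse(self, response):
        # Join every text node under <body> and trim surrounding whitespace.
        text = "".join(response.xpath("//body//text()").getall()).strip()
        yield {"text": text}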

Another option is to use nltk's clean_html() (note that clean_html() has been removed from newer NLTK releases, so this only works with older versions):

>>> import nltk
>>> html = """
... <div class="post-text" itemprop="description">
... 
...         <p>I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
... With <code>xpath('//body//text()')</code> I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !</p>
... 
...     </div>"""
>>> nltk.clean_html(html)
"I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.\nWith xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !"

Another option is to use BeautifulSoup's get_text():

get_text()

If you only want the human-readable text inside a document or tag, you can use the get_text() method. It returns all the text in a document or beneath a tag, as a single Unicode string.

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html, "html.parser")
>>> print(soup.get_text().strip())
I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !

Another option is to use lxml.html's text_content():

.text_content()

Returns the text content of the element, including the text content of its children, with no markup.

>>> import lxml.html
>>> tree = lxml.html.fromstring(html)
>>> print(tree.text_content().strip())
I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? Thanks !
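
Inside a Scrapy callback the same call can be applied to the downloaded page, since the raw HTML is available as response.text. A minimal sketch (callback only; the spider definition would look like the earlier example):

import lxml.html

def parse(self, response):
    # Parse the downloaded HTML and return all visible text without markup.
    tree = lxml.html.fromstring(response.text)
    yield {"text": tree.text_content().strip()}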