lxml is usually used through etree, but the book uses lxml.html. From my (admittedly shaky) reading of the English docs: etree is the general-purpose API with more functionality, while lxml.html is specialized for parsing HTML and adds a few HTML-specific methods...
etree
- from lxml import etree
- html = etree.HTML(sample)
- result1 = etree.tostring(html, pretty_print=True)
- print(result1)
lxml.html
- import lxml.html
- html = lxml.html.fromstring(sample)
- result2 = lxml.html.tostring(html, pretty_print=True)
- print(result2)
etree.HTML with an explicit encoding
- import lxml.etree as le
- with open('books.xml', 'r', encoding='utf-8') as b:
-     contents = b.read()
- contents_html = le.HTML(contents.encode('utf-8'))
- co_ht_xpath = contents_html.xpath('/*')
- print(co_ht_xpath)
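Why encode the text back to bytes before parsing? When a string carries its own encoding declaration, lxml refuses to parse it as a Python str and asks for bytes instead, so the parser can honour the declaration itself. A small sketch; the `contents` below is my own stand-in, since the real books.xml isn't shown in these notes:

```python
from lxml import etree

# Stand-in for books.xml (assumed content, not the book's file).
contents = "<?xml version='1.0' encoding='utf-8'?><books><book>aa</book></books>"

try:
    etree.fromstring(contents)  # str + encoding declaration is rejected
except ValueError as e:
    print('str rejected:', e)

root = etree.fromstring(contents.encode('utf-8'))  # bytes parse fine
print(root.xpath('/*'))
```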
The two results are basically the same, except that the etree output gains extra html and body tags (etree.HTML wraps fragments in a full document). Although I passed pretty_print=True in both cases, the actual output is still a jumbled mess with none of the neat indentation you'd expect; plenty of people online report the same thing, and I haven't found a fix. Beautiful Soup 4's prettify() is much nicer for this.
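For what it's worth, one workaround that gets suggested is to drop the whitespace-only text nodes at parse time, since pretty_print is silently ignored wherever the tree already carries its own whitespace. A hedged sketch; `sample` here is my own markup, not the book's:

```python
from lxml import etree

# My own stand-in for `sample` (assumed, not the book's fragment).
sample = "<div>\n  <a href='link1.html'>aa</a>\n  <a href='link2.html'>bb</a>\n</div>"

# remove_blank_text=True strips whitespace-only text nodes, leaving
# the serializer free to re-indent the tree from scratch.
parser = etree.HTMLParser(remove_blank_text=True)
html = etree.HTML(sample, parser)
out = etree.tostring(html, pretty_print=True).decode()
print(out)
```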
link[0].attrib
{'href': 'link1.html'}
The attrib property gives you the element's attributes as a dict.
link[0].attrib['href'] returns the same value as html.xpath('//a/@href')[0]
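A runnable sketch of that comparison; the `sample` fragment is my own, made up to match the values in these notes:

```python
from lxml import etree

# Assumed sample markup, not the book's.
sample = "<div><a href='link1.html'>aa</a><a href='link2.html'>bb</a></div>"
html = etree.HTML(sample)
link = html.xpath('//a')

via_attrib = link[0].attrib['href']     # dict-style access on the element
via_xpath = html.xpath('//a/@href')[0]  # attribute selected directly in XPath
print(via_attrib, via_xpath)
```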
link[0].text
'aa'
link[0].text_content()  (etree elements don't have this method; it's specific to lxml.html)
'aa'
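The difference between the two shows up once the element has children; a small sketch with my own fragment:

```python
import lxml.html

# .text stops at the first child tag; text_content() also gathers
# the text of every descendant (and the tails between them).
frag = lxml.html.fromstring('<p>hi <b>there</b> world</p>')
print(repr(frag.text))            # 'hi '
print(repr(frag.text_content()))  # 'hi there world'
```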
lxml.html.diff can show the differences between two HTML documents.
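A quick sketch with made-up fragments: htmldiff() merges two pieces of HTML, wrapping additions in <ins> and removals in <del>.

```python
from lxml.html.diff import htmldiff

# Assumed example fragments, not from the book.
old = '<p>Hello world</p>'
new = '<p>Hello brave new world</p>'
merged = htmldiff(old, new)  # additions come back wrapped in <ins>
print(merged)
```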
lxml.html.clean
clean_html removes all potentially unsafe ("suspicious") content, such as scripts and embedded objects, from untrusted markup.
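A minimal sketch; note that recent lxml releases moved the cleaner out into the separate lxml_html_clean package, so I try both import paths. The dirty fragment is my own example:

```python
# In newer lxml versions lxml.html.clean lives in the external
# lxml_html_clean package; fall back to it if the old path is gone.
try:
    from lxml.html.clean import clean_html
except ImportError:
    from lxml_html_clean import clean_html

# Assumed example input: a script tag and an inline JS handler.
dirty = '<div><script>alert(1)</script><p onclick="x()">ok</p></div>'
cleaned = clean_html(dirty)  # drops the script and the onclick attribute
print(cleaned)
```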