Those links point to images:
$ scrapy shell "https://en.wikipedia.org/wiki/Katy_Perry" -s USER_AGENT='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36'
2016-08-19 11:17:05 [scrapy] INFO: Scrapy 1.1.2 started (bot: scrapybot)
(...)
2016-08-19 11:17:06 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Katy_Perry> (referer: None)
(...)
In [1]: response.xpath('//a[@class="image"]/@href').extract()
Out[1]:
['/wiki/File:Katy_Perry_DNC_July_2016_(cropped).jpg',
'/wiki/File:Katy_Perry_performing.jpg',
'/wiki/File:Katy_Perry%E2%80%93Zenith_Paris.jpg',
'/wiki/File:PWT_Cropped.jpg',
'/wiki/File:Alanis_Morissette_5-19-2014.jpg',
'/wiki/File:Freddie_Mercury_performing_in_New_Haven,_CT,_November_1977.jpg',
'/wiki/File:Katy_Perry_California_Dreams_Tour_01.jpg',
'/wiki/File:Katy_Perry_UNICEF_2012.jpg',
'/wiki/File:Katy_Perry_Hillary_Clinton,_I%27m_With_Her_Concert.jpg',
'/wiki/File:Wikiquote-logo.svg',
'/wiki/File:Commons-logo.svg']
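The extracted hrefs are relative; inside the shell you can resolve them with `response.urljoin(href)`, or, as a plain standard-library sketch (page URL and hrefs taken from the output above):

```python
from urllib.parse import urljoin

base = "https://en.wikipedia.org/wiki/Katy_Perry"
hrefs = [
    "/wiki/File:Katy_Perry_performing.jpg",
    "/wiki/File:Commons-logo.svg",
]

# Resolve each relative href against the page URL.
absolute = [urljoin(base, h) for h in hrefs]
print(absolute[0])
# https://en.wikipedia.org/wiki/File:Katy_Perry_performing.jpg
```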
And link extractors filter out a lot of extensions by default, including images:
In [2]: from scrapy.linkextractors import LinkExtractor
In [3]: LinkExtractor(restrict_xpaths=('//a[@class="image"]')).extract_links(response)
Out[3]: []
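The empty result comes from the extractor's default `deny_extensions`, which is `scrapy.linkextractors.IGNORED_EXTENSIONS` and includes `jpg`, `svg`, and many other file extensions. A simplified plain-Python sketch of that filtering step (not Scrapy's actual implementation, and only a small subset of the real extension list):

```python
from urllib.parse import urlsplit

# Illustrative subset of scrapy.linkextractors.IGNORED_EXTENSIONS.
IGNORED_EXTENSIONS = {"jpg", "png", "gif", "svg", "pdf", "zip"}

def keep_link(url, deny_extensions=IGNORED_EXTENSIONS):
    """Mimic the extension check: drop URLs whose path ends in a denied extension."""
    path = urlsplit(url).path
    ext = path.rpartition(".")[2].lower() if "." in path else ""
    return ext not in deny_extensions

urls = [
    "https://en.wikipedia.org/wiki/File:Commons-logo.svg",
    "https://en.wikipedia.org/wiki/Katy_Perry",
]
print([u for u in urls if keep_link(u)])
# ['https://en.wikipedia.org/wiki/Katy_Perry']
```

With `deny_extensions=set()` (the `deny_extensions=[]` case below), nothing is filtered out.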
You can use `deny_extensions=[]` to filter nothing out:
In [4]: LinkExtractor(restrict_xpaths=('//a[@class="image"]'), deny_extensions=[]).extract_links(response)
Out[4]:
[Link(url='https://en.wikipedia.org/wiki/File:Katy_Perry_DNC_July_2016_(cropped).jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Katy_Perry_performing.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Katy_Perry%E2%80%93Zenith_Paris.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:PWT_Cropped.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Alanis_Morissette_5-19-2014.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Freddie_Mercury_performing_in_New_Haven,_CT,_November_1977.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Katy_Perry_California_Dreams_Tour_01.jpg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Katy_Perry_UNICEF_2012.jpg', text='', fragment='', nofollow=False),
Link(url="https://en.wikipedia.org/wiki/File:Katy_Perry_Hillary_Clinton,_I'm_With_Her_Concert.jpg", text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Wikiquote-logo.svg', text='', fragment='', nofollow=False),
Link(url='https://en.wikipedia.org/wiki/File:Commons-logo.svg', text='', fragment='', nofollow=False)]
You would use something like `restrict_xpaths='//td[@colspan="2"]'`, pointing at an element the anchors are contained in –
But I have passed `<a>` tags as XPaths before and all of them worked. For example, on [reddit.com/r/pics](https://www.reddit.com/r/pics), to follow the `href` to the next page I used `//a[@rel="nofollow next"]` and it works, it crawls into the next pages. I don't understand why the similar XPath `//a[@class="image"]` doesn't work in this example –