I want to crawl this website. I have written a spider, but it only crawls the front page, i.e. the top 52 items. How do I scrape a website with infinite scrolling?
I tried this code:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from aqaq.items import aqaqItem
import os
import urlparse
import ast

a = []

class aqaqspider(BaseSpider):
    name = "jabong"
    allowed_domains = ["jabong.com"]
    start_urls = [
        "http://www.jabong.com/women/clothing/womens-tops/",
    ]

    def parse(self, response):
        # ... Extract items in the page using extractors
        n = 3
        ct = 1
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@id="page"]')
        for site in sites:
            name = site.select('//div[@id="content"]/div[@class="l-pageWrapper"]/div[@class="l-main"]/div[@class="box box-bgcolor"]/section[@class="box-bd pan mtm"]/ul[@id="productsCatalog"]/li/a/@href').extract()
            print name
            print ct
            ct = ct + 1
            a.append(name)
        req = Request(url="http://www.jabong.com/women/clothing/womens-tops/?page=" + str(n),
                      headers={"Referer": "http://www.jabong.com/women/clothing/womens-tops/",
                               "X-Requested-With": "XMLHttpRequest"},
                      callback=self.parse, dont_filter=True)
        return req  # and your items
It shows the following output:
2013-10-31 09:22:42-0500 [jabong] DEBUG: Crawled (200) <GET http://www.jabong.com/women/clothing/womens-tops/?page=3> (referer: http://www.jabong.com/women/clothing/womens-tops/)
2013-10-31 09:22:42-0500 [jabong] DEBUG: Filtered duplicate request: <GET http://www.jabong.com/women/clothing/womens-tops/?page=3> - no more duplicates will be shown (see DUPEFILTER_CLASS)
2013-10-31 09:22:42-0500 [jabong] INFO: Closing spider (finished)
2013-10-31 09:22:42-0500 [jabong] INFO: Dumping Scrapy stats:
When I set dont_filter=True, it never stops.
Did you get the solution? –
No, I don't have a solution. – user2823667
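The spider above requests the same URL (`?page=3`) on every call, so the dupefilter drops it, and with `dont_filter=True` it loops forever. A common fix for "infinite scroll" pages that actually serve numbered AJAX pages is to increment the page number on each response and stop as soon as a page comes back with no products. Here is a minimal sketch of that termination logic, kept free of Scrapy so it is easy to follow; `fetch_product_links` is a hypothetical stand-in for the XPath extraction done in `parse()`:

```python
def paginate(fetch_product_links, base_url, max_pages=1000):
    """Collect product links page by page; stop at the first empty page.

    fetch_product_links(url) -> list of links found on that page
    (a stand-in for downloading the page and running the XPath selector).
    """
    links = []
    page = 1
    while page <= max_pages:
        batch = fetch_product_links("%s?page=%d" % (base_url, page))
        if not batch:        # an empty page means no more results
            break
        links.extend(batch)
        page += 1
    return links
```

Translated back into the spider, the same idea would be: track the current page number (e.g. in `response.meta`), and at the end of `parse()` yield a `Request` for page `n + 1` with `dont_filter=True` only if the current response contained product links; if the selector returned nothing, return without scheduling another request, and the crawl terminates on its own.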