URL I'm trying to scrape: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched — getting past an HTML form with Scrapy.
There are 3 pages in total: on the first page you select the term, on the second you select a subject, and the third holds the actual course information.
The problem I'm running into is that once subject() calls courses(), the HTML in response.body that the callback writes to file is the subject page instead of the courses page. How can I verify that I'm sending the correct form data so that I get the right response?
# term():
# Selects the school term to use. Clicks submit.
def term(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"p_term": "201705"},
        clickdata={"type": "submit"},
        callback=self.subject
    )

# subject():
# Selects the subject to query. Clicks submit.
def subject(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"sel_subj": "ART"},
        clickdata={"type": "submit"},
        callback=self.courses
    )

# courses():
# Currently just saves all the HTML on the page.
def courses(self, response):
    page = response.url.split("/")[-1]
    filename = 'uvic-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.log('Saved file %s' % filename)
Debug output:
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapy4uvic)
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy4uvic.spiders', 'SPIDER_MODULES': ['scrapy4uvic.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapy4uvic'}
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-02 01:15:28 [scrapy.core.engine] INFO: Spider opened
2017-04-02 01:15:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-02 01:15:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/robots.txt> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date> (referer: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckschd.p_get_crse_unsec> (referer: https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date)
2017-04-02 01:15:30 [uvic] DEBUG: Saved file uvic-bwckschd.p_get_crse_unsec.html
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-02 01:15:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2335,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 2,
'downloader/request_method_count/POST': 2,
'downloader/response_bytes': 105499,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 4, 2, 8, 15, 30, 103536),
'log_count/DEBUG': 6,
'log_count/INFO': 7,
'request_depth_max': 2,
'response_received_count': 4,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2017, 4, 2, 8, 15, 28, 987034)}
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Spider closed (finished)