In the previous section we finished scraping the data for a single page. Looking at the URLs, we can see that only the date differs; everything else is identical. In other words, as long as we take the fixed part of the URL and append a date, we can scrape the match data for any date we want.

https://trade.500.com/jczq/?date=2019-05-12
https://trade.500.com/jczq/?date=2019-05-13
https://trade.500.com/jczq/?date=2019-05-14

The prefix 'https://trade.500.com/jczq/?date=' is the same in every URL; only the date at the end changes. So all we need is a helper that can generate the URL for an arbitrary date, and we can scrape the data for any date.
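As a minimal sketch of this idea (plain string concatenation, using one of the example dates above):

base_url = 'https://trade.500.com/jczq/?date='
print(base_url + '2019-05-12')  # -> https://trade.500.com/jczq/?date=2019-05-12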

import time
import datetime
import urllib.parse

def GetBetweenday(begin_date, domain):
    date_list = []
    url_list = []
    # Convert the start-date string into a datetime object
    begin_date = datetime.datetime.strptime(begin_date, "%Y-%m-%d")
    # Today's date, also as a datetime object
    end_date = datetime.datetime.strptime(time.strftime('%Y-%m-%d', time.localtime(time.time())), "%Y-%m-%d")
    while begin_date <= end_date:
        date_str = begin_date.strftime("%Y-%m-%d")
        date_list.append(date_str)
        begin_date += datetime.timedelta(days=1)
    for i in date_list:
        data = {
            'date': i
        }
        url = urllib.parse.urlencode(data)
        urls = domain + '?' + url
        url_list.append(urls)
    return url_list

datetime.datetime.strptime(begin_date, "%Y-%m-%d") converts a date string into a datetime object.
time.strftime('%Y-%m-%d', time.localtime(time.time())) returns the current date as a string.
urllib.parse.urlencode accepts parameters of the form [(key1, value1), (key2, value2), ...] or {'key1': 'value1', 'key2': 'value2', ...} and returns a query string of the form key1=value1&key2=value2. For example, urllib.parse.urlencode({'name': 'LaoWang', 'sex': 'male'}) returns name=LaoWang&sex=male (non-ASCII values are percent-encoded). So we borrow this function to append the date to the URL:

data = {
    'date': i
}
url = urllib.parse.urlencode(data)
urls = domain + '?' + url
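To sanity-check the helper before wiring it into the spider, here is a quick usage sketch (the printed URLs follow from the example dates above; the full list runs up to today's date):

urls = GetBetweenday('2019-05-12', 'https://trade.500.com/jczq/')
print(urls[0])  # https://trade.500.com/jczq/?date=2019-05-12
print(urls[1])  # https://trade.500.com/jczq/?date=2019-05-13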
The complete spider then looks like this:

import scrapy
from ZuCai.items import ZucaiItem
from ZuCai.spiders.get_date import GetBetweenday

class ZucaiSpider(scrapy.Spider):
    name = 'zucai'
    allowed_domains = ['trade.500.com']
    start_urls = ['https://trade.500.com/jczq/']

    def start_requests(self):
        # Call the date helper: build one URL per day from 2019-04-15 up to today
        next_url = GetBetweenday('2019-04-15', 'https://trade.500.com/jczq/')
        for url in next_url:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        datas = response.xpath('//div[@class="bet-main bet-main-dg"]/table/tbody/tr')
        for data in datas:
            item = ZucaiItem()
            item['League'] = data.xpath('.//td[@class="td td-evt"]/a/text()').extract()[0]
            item['Time'] = data.xpath('.//td[@class="td td-endtime"]/text()').extract()[0]
            item['Home_team'] = data.xpath('.//span[@class="team-l"]/a/text()').extract()[0]
            item['Result'] = data.xpath('.//i[@class="team-vs team-bf"]/a/text()').extract()[0]
            item['Away_team'] = data.xpath('.//span[@class="team-r"]/a/text()').extract()[0]
            item['Win'] = data.xpath('.//div[@class="betbtn-row itm-rangB1"]/p[1]/span/text()').extract()[0]
            item['Level'] = data.xpath('.//div[@class="betbtn-row itm-rangB1"]/p[2]/span/text()').extract()[0]
            item['Negative'] = data.xpath('.//div[@class="betbtn-row itm-rangB1"]/p[3]/span/text()').extract()[0]
            yield item

During execution you may hit an index-out-of-range error; if so, replace extract()[0] with extract_first(). With that, scraping the football betting data from any chosen start date up to the current date is complete, and the collected data can be seen in the database.
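For reference, the ZucaiItem imported from ZuCai.items has to declare a scrapy.Field for every key assigned in parse(). The project's actual items.py is not shown here; a minimal sketch consistent with the fields used above would be:

import scrapy

class ZucaiItem(scrapy.Item):
    # One Field per value filled in ZucaiSpider.parse()
    League = scrapy.Field()
    Time = scrapy.Field()
    Home_team = scrapy.Field()
    Result = scrapy.Field()
    Away_team = scrapy.Field()
    Win = scrapy.Field()
    Level = scrapy.Field()
    Negative = scrapy.Field()

On the extract_first() note above: extract_first() returns None when an XPath selector matches nothing, whereas extract()[0] raises IndexError, which is why swapping it in keeps the spider running on rows with missing text nodes.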