
Scraping Super Lotto (大樂透) Data with the Scrapy Framework

GitHub project: https://github.com/v587xpt/lottery_spider


Last time I scraped Shuangseqiu (雙色球) data. Scraping Super Lotto data is just as easy and could be done with the requests library alone, but to push myself a bit further I used the Scrapy framework this time.

I won't go over how Scrapy works internally here; if you are new to the framework, read up on it first.



I. Creating the project

I develop on Windows, so Scrapy needs to be installed there; the rest of this article assumes the framework is already installed.
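If it is not installed yet, it can usually be set up with pip:

pip install scrapy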

1. Open cmd and run:

scrapy startproject lottery_spider

This generates a lottery_spider project in the directory where the command was run.

2. Then run cd lottery_spider to enter the project, and execute:

scrapy genspider lottery "www.lottery.gov.cn"

lottery is the name of the spider file;

www.lottery.gov.cn is the domain of the target site.

Once this runs, the spider file lottery.py is generated under the project's spiders folder.
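For reference, the generated project typically has this layout (it may differ slightly between Scrapy versions):

lottery_spider/
    scrapy.cfg                # deploy configuration
    lottery_spider/
        __init__.py
        items.py              # item models
        middlewares.py        # spider and downloader middlewares
        pipelines.py          # item pipelines
        settings.py           # project settings
        spiders/
            __init__.py
            lottery.py        # the spider created by genspider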

II. Code for each file in the project

1. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class LotterySpiderItem(scrapy.Item):
    qihao = scrapy.Field()      # draw number
    bule_ball = scrapy.Field()  # list of the five "blue" balls
    red_ball = scrapy.Field()   # list of the two "red" balls

This file defines the data model, i.e., the fields each scraped record carries: qihao, bule_ball, and red_ball.
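A Scrapy Item behaves much like a dict, which the pipeline below relies on. A quick illustrative sketch (not part of the project files; the values are made up for demonstration):

item = LotterySpiderItem(qihao='19001')
item['bule_ball'] = ['01', '05', '12', '22', '33']  # dict-style assignment
item['red_ball'] = ['03', '09']
print(dict(item))   # an Item converts cleanly to a plain dict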
2. lottery.py

# -*- coding: utf-8 -*-
import scrapy
from lottery_spider.items import LotterySpiderItem

class LotterySpider(scrapy.Spider):
    name = 'lottery'
    allowed_domains = ['gov.cn']        # domains the spider may crawl; links outside them are ignored
    start_urls = ['http://www.lottery.gov.cn/historykj/history_1.jspx?_ltype=dlt']  # start page; crawling begins here

    def parse(self, response):
        # locate every result row with XPath; this returns a list-like SelectorList
        results = response.xpath("//div[@class='yylMain']//div[@class='result']//tbody//tr")
        for result in results:  # iterate over the rows one by one
            qihao = result.xpath(".//td[1]//text()").get()
            bule_ball_1 = result.xpath(".//td[2]//text()").get()
            bule_ball_2 = result.xpath(".//td[3]//text()").get()
            bule_ball_3 = result.xpath(".//td[4]//text()").get()
            bule_ball_4 = result.xpath(".//td[5]//text()").get()
            bule_ball_5 = result.xpath(".//td[6]//text()").get()
            red_ball_1 = result.xpath(".//td[7]//text()").get()
            red_ball_2 = result.xpath(".//td[8]//text()").get()

            bule_ball_list = []     # list holding the five blue balls
            bule_ball_list.append(bule_ball_1)
            bule_ball_list.append(bule_ball_2)
            bule_ball_list.append(bule_ball_3)
            bule_ball_list.append(bule_ball_4)
            bule_ball_list.append(bule_ball_5)

            red_ball_list = []      # list holding the two red balls
            red_ball_list.append(red_ball_1)
            red_ball_list.append(red_ball_2)

            print("===================================================")
            print("?期號:"+ str(qihao) + " ?" + "藍球:"+ str(bule_ball_list) + " ?" + "紅球" + str(red_ball_list))

            item = LotterySpiderItem(qihao = qihao,bule_ball = bule_ball_list,red_ball = red_ball_list)
            yield item

        next_url = response.xpath("//div[@class='page']/div/a[3]/@href").get()
        if not next_url:
            return
        else:
            last_url = "http://www.lottery.gov.cn/historykj/" + next_url
            yield scrapy.Request(last_url, callback=self.parse)  # pass the parse method itself, without ()

This is the spider file that does the actual crawling.
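When an XPath stops matching (say the page layout changes), it helps to test selectors interactively in the Scrapy shell before touching the spider. An illustrative session using the same paths as above:

scrapy shell "http://www.lottery.gov.cn/historykj/history_1.jspx?_ltype=dlt"
>>> rows = response.xpath("//div[@class='yylMain']//div[@class='result']//tbody//tr")
>>> len(rows)                                # how many draws the page lists
>>> rows[0].xpath(".//td[1]//text()").get()  # the first row's draw number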

3. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import json

class LotterySpiderPipeline(object):
    def __init__(self):
        print("Spider starting......")
        self.fp = open("daletou.json", 'w', encoding='utf-8')  # open the output json file

    def process_item(self, item, spider):
        item_json = json.dumps(dict(item), ensure_ascii=False)      # note: wrap item in dict() so it can be serialized
        self.fp.write(item_json + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
        print("Spider finished......")

This file handles persistence; the code above writes each item out as a line of JSON.
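For a quick export you do not strictly need a custom pipeline: Scrapy's built-in feed exports can write items straight to a file from the command line. An illustrative alternative (the .jl extension selects the JSON-lines format):

scrapy crawl lottery -o daletou.jl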
4. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for lottery_spider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'lottery_spider'

SPIDER_MODULES = ['lottery_spider.spiders']
NEWSPIDER_MODULE = 'lottery_spider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'lottery_spider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False    # False: do not fetch or obey the site's robots.txt

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1      # throttle the crawl to one request per second
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {        # request headers; the User-Agent makes requests look like a browser's
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'lottery_spider.middlewares.LotterySpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'lottery_spider.middlewares.LotterySpiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {    # uncommented so that the pipeline in pipelines.py runs;
   'lottery_spider.pipelines.LotterySpiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

This file is the run-time configuration for the whole crawler project.
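If you would rather keep such overrides scoped to a single spider instead of the whole project, Scrapy also supports a per-spider custom_settings class attribute. A minimal illustrative sketch:

import scrapy

class LotterySpider(scrapy.Spider):
    name = 'lottery'
    # these override the project-wide settings.py values for this spider only
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'ROBOTSTXT_OBEY': False,
    }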
5. start.py

from scrapy import cmdline

cmdline.execute("scrapy crawl lottery".split())
# equivalent to:
# cmdline.execute(["scrapy", "crawl", "lottery"])

This file is newly added; with it you can launch the project from the IDE instead of typing the crawl command in cmd every time.
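Another common way to launch a spider from a script is Scrapy's CrawlerProcess API, which avoids shelling out through cmdline. A minimal illustrative sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from lottery_spider.spiders.lottery import LotterySpider

process = CrawlerProcess(get_project_settings())  # picks up settings.py
process.crawl(LotterySpider)
process.start()  # blocks until the crawl finishes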
