This assignment comes from: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE2/homework/2941

1. From a news article's url, get the news details: a dict, anews


2. From a list page's url, get the news urls: list.append(dict), alist

3. Generate the urls of all list pages and fetch all the news: list.extend(list), allnews

* Each student crawls the 10 list pages starting from the last digits of their student ID (see the sketch after the main crawl loop below).

4. Set a reasonable crawl interval:

import time
import random

time.sleep(random.random()*3)

5. Do simple data processing with pandas and save the result

Save to a csv or excel file:

newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
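
Or save to Excel instead (a one-line variant, assuming the openpyxl package is installed):

newsdf.to_excel(r'gzccnews.xlsx')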

Save to a database:

import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb',db)

Code implementation:

# Fetch all the information of one news article
import re
import requests
import time
import random
import pandas as pd
from bs4 import BeautifulSoup
from datetime import datetime

# Extract the news id from an article url,
# e.g. '.../html/2019/xiaoyuanxinwen_0320/11029.html' -> '11029'
def newsnum(url):
    newsid = re.match(r'http://news.gzcc.cn/html/\d{4}/xiaoyuanxinwen_(.*)/(.*).html', url).group(2)
    return newsid

# Publish time, parsed into a datetime object
def newstime(soup):
    info = soup.select('.show-info')[0].text.split()
    newsdate = info[0].split(':')[1]   # '发布时间:2019-04-01' -> '2019-04-01'
    newstime = info[1]                 # e.g. '11:57:00'
    dt = datetime.strptime(newsdate + ' ' + newstime, '%Y-%m-%d %H:%M:%S')
    return dt

# Fetch the click count from the counter API
def getClick(newsid):
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsid)
    res = requests.get(clickUrl)
    # the API returns a small JS snippet; strip it down to the bare number
    click = res.text.split('.html')[-1].lstrip("('").rstrip("');")
    return click

# Fetch the details of one news article as a dict
def getDetails(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    anews = {}
    newsid = newsnum(url)                        # news id
    anews['newsId'] = newsid
    anews['title'] = soup.select('.show-title')[0].text
    anews['publishTime'] = str(newstime(soup))
    # the info bar reads '作者:xxx'; the regex strips the Chinese label
    anews['author'] = re.match('作者:(.*)', soup.select('.show-info')[0].text.split()[2]).group(1)
    # anews['source'] = re.match('来源:(.*)', soup.select('.show-info')[0].text.split()[3]).group(1)
    anews['clicks'] = getClick(newsid)
    anews['url'] = url
    # anews['content'] = soup.select('.show-content p')[0].text
    return anews

# Collect the news urls from one list page
def getUrl(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    alist = []
    for a in soup.select('.news-list')[0].select('a'):
        alist.append(a['href'])
    return alist

# Crawl one list page, fetching every article with a polite random delay
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/74.html'
allnews = []
for i in getUrl(url):
    allnews.append(getDetails(i))
    time.sleep(random.random() * 3)
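
The loop above only covers a single list page. A minimal sketch of step 3, assuming the list pages follow the pattern http://news.gzcc.cn/html/xiaoyuanxinwen/{n}.html that page 74 above suggests, with start_page standing in for the last digits of your student ID:

# Step 3: generate 10 consecutive list-page urls and collect all the news
start_page = 74    # assumption: replace with the tail digits of your student ID
allnews = []
for n in range(start_page, start_page + 10):
    listurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(n)
    for newsurl in getUrl(listurl):
        allnews.append(getDetails(newsurl))
        time.sleep(random.random() * 3)    # polite crawl interval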

# Write the csv file
newsdf = pd.DataFrame(allnews)
newsdf.to_csv(r'gzccnews.csv', encoding='utf-8')

# Write the sqlite database file
import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb',db)
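
To check the save, the table can be read straight back into a DataFrame (a quick sanity check using pandas.read_sql):

with sqlite3.connect('gzccnewsdb.sqlite') as db:
    check = pd.read_sql('select * from gzccnewsdb', con=db)
print(check.head())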

print(allnews)

[Figure 1: console output of the crawled news list]

The generated csv file:

[Figure 2: the generated csv file]

The generated sqlite file:

[Figure 3: the generated sqlite file]

 
