[07] Scraping Jianshu / cnblogs / CSDN personal homepages to build a table of contents
Published: 2019-03-13



Scraping a Jianshu personal homepage to build a table of contents

The code below runs on Python 2 (a Python 3 version follows). The comments are fairly detailed, so reading the code directly should be enough.

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣ Start scraping'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the article titles/links and save them
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print link
        i = -1
        for data in title:
            print data
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data.encode('utf-8') + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print '█████████████◣ Scraping finished!'

# Main entry point
if __name__ == '__main__':
    page = int(raw_input('Enter the total number of pages to scrape: '))
    for num in range(page):
        # Put your own profile path here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        # Create the crawler
        js = CrawlJs()
        # Fetch the page content
        content = js.getArticle(url)
        # Save the extracted entries to the text file
        js.save(content)
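Each run appends one Markdown link per article to JsIndex.txt, built from the scraped title and the relative href, so the file can be pasted straight into a Markdown document as a table of contents. With hypothetical titles and article IDs, the output looks roughly like this:

[Title of the first article](http://www.jianshu.com/p/0123456789ab)
[Title of the second article](http://www.jianshu.com/p/abcdef012345)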

Run result (screenshot)

Actual effect of the Jianshu table of contents (screenshot)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Start scraping')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the article titles/links and save them
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print(link)
        i = -1
        for data in title:
            print(data)
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print('█████████████◣ Scraping finished!')

# Main entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to scrape: '))
    for num in range(page):
        # Put your own profile path here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
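The manual i = -1 / i += 1 counter pairs each title with its link by index; zip does the same pairing more directly, and it is the approach the cnblogs version below already uses. As a minimal sketch, the extraction step could be written as the standalone helper below (save_jianshu is an illustrative name, not part of the original script; same XPath expressions and output format):

from lxml import etree

def save_jianshu(content):
    # Same XPath expressions as above; zip pairs each title with its href directly
    xml = etree.HTML(content)
    title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
    link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
    # Opening the file once, instead of once per article, also avoids repeated reopening
    with open('JsIndex.txt', 'a+') as f:
        for t, li in zip(title, link):
            f.write('[' + t + '](http://www.jianshu.com' + li + ')\n')
    print('█████████████◣ Scraping finished!')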

Scraping a cnblogs (博客园) personal homepage to build a table of contents

Python 2 code

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣ Start scraping'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the post titles/links and save them
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print (title, link)
        # print(zip(title, link))
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t.encode('utf-8') + li + '\n')
        print '█████████████◣ Scraping finished!'

# Main entry point
if __name__ == '__main__':
    page = int(raw_input('Enter the total number of pages to scrape: '))
    for num in range(page):
        # Put your own blog homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Start scraping')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the post titles/links and save them
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print (title, link)
        # print(zip(title, link))
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t + '  ' + li + '\n')
        print('█████████████◣ Scraping finished!')

# Main entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to scrape: '))
    for num in range(page):
        # Put your own blog homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
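The cnblogs scripts write plain "title  link" lines to bokeyuan.txt. If you prefer a ready-to-paste Markdown list like the one the Jianshu script produces, only the write step needs to change; cnblogs post hrefs are typically absolute URLs already, so no site prefix is needed. A minimal sketch (write_markdown_entry is an illustrative helper name, not part of the original code), called as write_markdown_entry('bokeyuan.txt', t, li) inside the loop:

def write_markdown_entry(path, title, link):
    # Append one "[title](link)" line instead of "title  link"
    with open(path, 'a+') as f:
        f.write('[' + title.strip() + '](' + link + ')\n')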

Building a table of contents for a CSDN personal blog

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Start scraping')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the article titles/links and save them
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
        link = xml.xpath('//div[@class="article-list"]/div/h4/a/@href')
        # xpath() returns a list; an empty list means the page has no articles
        if not title:
            return
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('csdn.txt', 'a+') as f:
                f.write(t.strip() + '  ' + li + '\n')
        print('█████████████◣ Scraping finished!')

# Main entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to scrape: '))
    for num in range(page):
        # Put your own article-list URL here
        url = 'https://blog.csdn.net/xc_zhou/article/list/%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
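The text()[2] in the title XPath is worth a note: on the old CSDN article-list layout the title anchor appears to start with an article-type label (e.g. 原创) wrapped in a span, so the title proper is the anchor's second text node, which is also why the code calls .strip() before writing. A small self-contained illustration against a hypothetical fragment mimicking that markup:

from lxml import etree

# Hypothetical fragment mimicking the old CSDN article-list markup
html = '''
<div class="article-list">
  <div><h4><a href="https://blog.csdn.net/xc_zhou/article/details/1">
    <span class="article-type">原创</span>
    A hypothetical article title
  </a></h4></div>
</div>
'''
xml = etree.HTML(html)
# text()[1] is the whitespace before the span; text()[2] is the title itself
titles = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
print(titles[0].strip())   # -> A hypothetical article title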

Result (screenshot)
