[07] Scraping Jianshu / cnblogs / CSDN personal homepages to build a table of contents
Published: 2019-03-13



Scraping a Jianshu personal homepage to build a table of contents

The code below targets Python 2 (a Python 3 version follows). The comments are detailed, so you can go straight to the code.

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣ Fetching data...'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the article titles and links and append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print link
        i = -1
        for data in title:
            print data
            i += 1
            # Write each entry as a Markdown link: [title](url)
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data.encode('utf-8') + '](http://www.jianshu.com' + link[i] + ')' + '\n')
        print '█████████████◣ Done!'

# Entry point
if __name__ == '__main__':
    page = int(raw_input('How many pages to scrape: '))
    for num in range(page):
        # Put your own profile path here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        # Create the crawler
        js = CrawlJs()
        # Fetch the page content
        content = js.getArticle(url)
        # Parse it and save the results
        js.save(content)
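Each line written to JsIndex.txt is a Markdown link, so the file can be pasted straight into a blog post as a clickable table of contents. With made-up titles and article slugs, the output looks roughly like this:

    [My first post](http://www.jianshu.com/p/xxxxxxxxxxxx)
    [Notes on lxml and XPath](http://www.jianshu.com/p/yyyyyyyyyyyy)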

Run result (screenshot)

Actual effect of the generated Jianshu table of contents (screenshot)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Fetching data...')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the article titles and links and append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print(link)
        i = -1
        for data in title:
            print(data)
            i += 1
            # Write each entry as a Markdown link: [title](url)
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data + '](http://www.jianshu.com' + link[i] + ')' + '\n')
        print('█████████████◣ Done!')

# Entry point
if __name__ == '__main__':
    page = int(input('How many pages to scrape: '))
    for num in range(page):
        # Put your own profile path here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
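The i = -1 counter in save() works, but pairing the two XPath result lists with zip, as the cnblogs version below does, is tidier. A minimal alternative sketch of the same logic (save_index is a hypothetical name, and it assumes titles and links line up one-to-one):

    from lxml import etree

    # Alternative to CrawlJs.save(): pair titles and links with zip
    # instead of a manual index counter (same output format)
    def save_index(content):
        xml = etree.HTML(content)
        titles = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        links = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        # Open the file once instead of once per article
        with open('JsIndex.txt', 'a+') as f:
            for title, link in zip(titles, links):
                print(title)
                f.write('[%s](http://www.jianshu.com%s)\n' % (title, link))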

Scraping a cnblogs (博客园) personal homepage to build a table of contents

Python 2 code

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣ Fetching data...'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the post titles and links and append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print(title, link)
        # Pair each title with its link; zip stops at the shorter list
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t.encode('utf-8') + li + '\n')
        print '█████████████◣ Done!'

# Entry point
if __name__ == '__main__':
    page = int(raw_input('How many pages to scrape: '))
    for num in range(page):
        # Put your own blog homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Fetching data...')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the post titles and links and append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print(title, link)
        # Pair each title with its link; zip stops at the shorter list
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t + '  ' + li + '\n')
        print('█████████████◣ Done!')

# Entry point
if __name__ == '__main__':
    page = int(input('How many pages to scrape: '))
    for num in range(page):
        # Put your own blog homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
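One caveat that applies to every script in this post: the output file is opened in 'a+' (append) mode, so running a script a second time appends duplicate entries. If you re-run it, delete the old file first, or truncate it once at startup:

    # Truncate the output file once before scraping, so repeated
    # runs don't accumulate duplicate lines
    open('bokeyuan.txt', 'w').close()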

Building a CSDN personal table of contents

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣ Fetching data...')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the article titles and links and append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        # text()[2]: the title is the anchor's second text node (see the note below)
        title = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
        link = xml.xpath('//div[@class="article-list"]/div/h4/a/@href')
        # xpath() returns a list, never None; stop when a page has no articles
        if not title:
            return
        for t, li in zip(title, link):
            print(t + li)
            with open('csdn.txt', 'a+') as f:
                f.write(t.strip() + '  ' + li + '\n')
        print('█████████████◣ Done!')

# Entry point
if __name__ == '__main__':
    page = int(input('How many pages to scrape: '))
    for num in range(page):
        # Put your own article-list URL here
        url = 'https://blog.csdn.net/xc_zhou/article/list/%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
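A note on the title XPath: on the CSDN article list of that era, each h4 > a started with a small badge (e.g. an 原创 span) before the title text, so the visible title is the anchor's second text node, selected by text()[2] (XPath indexes from 1). A self-contained sketch with a made-up HTML fragment shows what this selects, and why save() calls t.strip():

    from lxml import etree

    # Hypothetical fragment mimicking one CSDN list item of that era
    html = '''
    <div class="article-list">
      <div><h4><a href="/xc_zhou/article/details/123">
        <span class="article-type">原创</span>
        Scraping notes
      </a></h4></div>
    </div>
    '''
    xml = etree.HTML(html)
    print(xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]'))
    # -> ['\n    Scraping notes\n  '] (whitespace intact, hence t.strip())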

Result (screenshot)
