[07] Scraping Jianshu | cnblogs | CSDN Personal Homepages to Build a Table of Contents
Published: 2019-03-13


Scraping a Jianshu Personal Homepage to Build a Table of Contents

The code in this section runs on Python 2 (a Python 3 version follows below). The comments are fairly detailed, so you can read the code directly.

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the page content for the given URL
    def getArticle(self, url):
        print '█████████████◣ Start crawling'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print link
        i = -1
        for data in title:
            print data
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data.encode('utf-8') + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print '█████████████◣ Crawling finished!'

# Entry point
if __name__ == '__main__':
    page = int(raw_input('Enter the total number of pages to crawl: '))
    for num in range(page):
        # Personal homepage path, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        # Create the crawler
        js = CrawlJs()
        # Fetch the page content
        content = js.getArticle(url)
        # Save the extracted entries
        js.save(content)
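To see what the two XPath expressions pull out and what save() writes into JsIndex.txt, here is a minimal, self-contained sketch run against a hand-written HTML fragment. The fragment only assumes the structure the XPaths target (div.content > a.title); Jianshu's real markup may have changed since this was written.

# Illustrative sketch of the extraction step above, using an assumed HTML fragment.
from lxml import etree

sample_html = '''
<ul class="note-list">
  <li><div class="content"><a class="title" href="/p/abc123">First article</a></div></li>
  <li><div class="content"><a class="title" href="/p/def456">Second article</a></div></li>
</ul>
'''

xml = etree.HTML(sample_html)
titles = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
links = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')

# Each entry becomes one Markdown link line, the same format save() writes to JsIndex.txt
for t, href in zip(titles, links):
    print('[%s](http://www.jianshu.com%s)' % (t, href))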

Run result (screenshot)

Jianshu table of contents, actual result (screenshot)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the page content for the given URL
    def getArticle(self, url):
        print('█████████████◣ Start crawling')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print(link)
        i = -1
        for data in title:
            print(data)
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                f.write('[' + data + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print('█████████████◣ Crawling finished!')

# Entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to crawl: '))
    for num in range(page):
        # Personal homepage path, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
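The Python 3 version above sticks to urllib.request from the standard library. If you prefer the third-party requests library, the fetch step could be written roughly as below. This is a sketch of an alternative, not part of the original script, and assumes requests is installed (pip install requests).

# Optional variant of getArticle() using requests (assumed to be installed).
import requests

def get_article(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
    }
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()   # fail loudly on HTTP errors
    return resp.content       # bytes, same as urlopen(...).read()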

Scraping a cnblogs Personal Homepage to Build a Table of Contents

Python 2 code

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree

class CrawlJs():
    # Fetch the page content for the given URL
    def getArticle(self, url):
        print '█████████████◣ Start crawling'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Extract the titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print title, link
        # print(zip(title, link))
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print t + li
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t.encode('utf-8') + li + '\n')
        print '█████████████◣ Crawling finished!'

# Entry point
if __name__ == '__main__':
    page = int(raw_input('Enter the total number of pages to crawl: '))
    for num in range(page):
        # Personal homepage, e.g. http://www.cnblogs.com/zhouxinfei/
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the page content for the given URL
    def getArticle(self, url):
        print('█████████████◣ Start crawling')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print(title, link)
        # print(list(zip(title, link)))
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t + '  ' + li + '\n')
        print('█████████████◣ Crawling finished!')

# Entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to crawl: '))
    for num in range(page):
        # Personal homepage, e.g. http://www.cnblogs.com/zhouxinfei/
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
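Unlike the Jianshu script, the cnblogs version pairs titles and links with zip() instead of an index counter. A tiny illustration with made-up data, showing the line format written to bokeyuan.txt:

# Illustration of the zip() pairing used in save() above, with made-up data.
titles = ['Post A', 'Post B']
links = ['http://www.cnblogs.com/zhouxinfei/p/1.html',
         'http://www.cnblogs.com/zhouxinfei/p/2.html']

# zip() walks both lists in lockstep, so each title lines up with its own URL;
# if the two XPath queries returned different lengths, the extra items are silently dropped.
for t, li in zip(titles, links):
    print(t + '  ' + li)   # the same "title  url" format written to bokeyuan.txt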

Building a CSDN Personal Table of Contents

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree

class CrawlJs():
    # Fetch the page content for the given URL
    def getArticle(self, url):
        print('█████████████◣ Start crawling')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Extract the titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
        link = xml.xpath('//div[@class="article-list"]/div/h4/a/@href')
        if not title:          # nothing matched on this page, skip it
            return
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('csdn.txt', 'a+') as f:
                f.write(t.strip() + '  ' + li + '\n')
        print('█████████████◣ Crawling finished!')

# Entry point
if __name__ == '__main__':
    page = int(input('Enter the total number of pages to crawl: '))
    for num in range(page):
        # Personal article list, e.g. https://blog.csdn.net/xc_zhou/article/list/1
        url = 'https://blog.csdn.net/xc_zhou/article/list/%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
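Note that the title XPath ends in text()[2] rather than text(). A plausible reason (an assumption about CSDN's list markup at the time, not verified here) is that the <a> inside <h4> starts with a badge such as an "原创" span, so the first text node is only whitespace and the real title is the second text node; that is also why .strip() is applied before writing. The fragment below mimics that assumed structure:

# Illustrative sketch of why text()[2] is used; the HTML structure is an assumption.
from lxml import etree

sample = '''
<div class="article-list">
  <div><h4><a href="https://blog.csdn.net/xc_zhou/article/details/1">
    <span>原创</span>
    My first CSDN post
  </a></h4></div>
</div>
'''

xml = etree.HTML(sample)
titles = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
links = xml.xpath('//div[@class="article-list"]/div/h4/a/@href')
print([t.strip() for t in titles], links)   # ['My first CSDN post'] plus the article URL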

Result (screenshot)
