[07] Scraping Jianshu / cnblogs / CSDN personal homepages to build a table of contents
Published: 2019-03-13



Scraping a Jianshu personal homepage to build a table of contents

The code in this article was written for Python 2 (Python 3 versions are given below as well). The comments are detailed, so reading the code directly should be enough.

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree


class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣开始爬取数据'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Pick out the article titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print link
        i = -1
        for data in title:
            print data
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                # Write each entry as a Markdown link: [title](absolute URL)
                f.write('[' + data.encode('utf-8') + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print '█████████████◣爬取完成!'


# Main program entry
if __name__ == '__main__':
    page = int(raw_input('请输入你要抓取的页码总数:'))
    for num in range(page):
        # Put your own personal homepage here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        js = CrawlJs()                  # create the crawler
        content = js.getArticle(url)    # fetch the page content
        js.save(content)                # extract the entries and save them to the text file

Running results

(screenshot: console output of the run)

(screenshot: the generated Jianshu table of contents)
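Each run appends one Markdown link per article to JsIndex.txt, so the file can be pasted straight into a Markdown document as a table of contents. A hypothetical two-entry example of the output (the titles and article IDs are made up for illustration):

[Example article title one](http://www.jianshu.com/p/xxxxxxxxxxxx)
[Example article title two](http://www.jianshu.com/p/yyyyyyyyyyyy)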

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree


class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣开始爬取数据')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Pick out the article titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
        link = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
        print(link)
        i = -1
        for data in title:
            print(data)
            i += 1
            with open('JsIndex.txt', 'a+') as f:
                # Write each entry as a Markdown link: [title](absolute URL)
                f.write('[' + data + ']' + '(' + 'http://www.jianshu.com' + link[i] + ')' + '\n')
        print('█████████████◣爬取完成!')


# Main program entry
if __name__ == '__main__':
    page = int(input('请输入你要抓取的页码总数:'))
    for num in range(page):
        # Put your own personal homepage here, e.g. u/c475403112ce
        url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
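If you prefer the third-party requests library over urllib, the same fetch-and-parse step can be written as the sketch below. It assumes requests and lxml are installed (pip install requests lxml) and that Jianshu's markup still matches the XPath expressions used above.

import requests
from lxml import etree

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
}

def crawl_page(url):
    # Fetch one listing page and return (title, href) pairs
    html = requests.get(url, headers=HEADERS, timeout=10).text
    xml = etree.HTML(html)
    titles = xml.xpath('//div[@class="content"]/a[@class="title"]/text()')
    links = xml.xpath('//div[@class="content"]/a[@class="title"]/@href')
    return list(zip(titles, links))

if __name__ == '__main__':
    url = 'http://www.jianshu.com/u/c475403112ce?order_by=shared_at&page=1'
    with open('JsIndex.txt', 'a+', encoding='utf-8') as f:
        for title, href in crawl_page(url):
            f.write('[%s](http://www.jianshu.com%s)\n' % (title, href))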

Scraping a cnblogs personal homepage to build a table of contents

Python 2 code

#-*- coding:utf-8 -*-
import urllib2
from lxml import etree


class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print '█████████████◣开始爬取数据'
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib2.Request(url, headers=my_headers)
        content = urllib2.urlopen(request).read()
        return content

    # Pick out the post titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print(title, link)
        # print(zip(title, link))
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                f.write(t.encode('utf-8') + li + '\n')
        print '█████████████◣爬取完成!'


# Main program entry
if __name__ == '__main__':
    page = int(raw_input('请输入你要抓取的页码总数:'))
    for num in range(page):
        # Put your own personal homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
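A side note on the commented-out debug lines: in Python 2, zip() and map() return lists, so printing them shows the pairs directly, while in Python 3 they return lazy iterators that need to be wrapped in list() first. A quick illustration with made-up titles and links:

titles = ['title one', 'title two']
links = ['/p/111111', '/p/222222']

# Python 2: prints the list of pairs; Python 3: prints an iterator object
print(zip(titles, links))

# Works the same way in both versions: materialise the pairs before printing
print(list(zip(titles, links)))
print(list(map(lambda t, l: [t, l], titles, links)))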

Python 3 code

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree


class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣开始爬取数据')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Pick out the post titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        title = xml.xpath('//*[@class="postTitle"]/a/text()')
        link = xml.xpath('//*[@class="postTitle"]/a/@href')
        print(title, link)
        # print(zip(title, link))
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('bokeyuan.txt', 'a+') as f:
                # Write "title  link", separated by two spaces
                f.write(t + '  ' + li + '\n')
        print('█████████████◣爬取完成!')


# Main program entry
if __name__ == '__main__':
    page = int(input('请输入你要抓取的页码总数:'))
    for num in range(page):
        # Put your own personal homepage here
        url = 'http://www.cnblogs.com/zhouxinfei/default.html?page=%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
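The cnblogs script saves plain "title  link" pairs, while the Jianshu script writes Markdown links. If you want the cnblogs list in the same Markdown form, a small post-processing sketch like the one below works. It assumes bokeyuan.txt was produced by the Python 3 script above (title and link separated by two spaces, one entry per line); the output file name bokeyuan.md is just an example.

# Convert "title  link" lines from bokeyuan.txt into a Markdown link list
with open('bokeyuan.txt', encoding='utf-8') as src, \
        open('bokeyuan.md', 'w', encoding='utf-8') as dst:
    for line in src:
        line = line.strip()
        if not line:
            continue
        title, _, link = line.rpartition('  ')   # the link itself never contains two spaces
        dst.write('[%s](%s)\n' % (title.strip(), link))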

Building a table of contents from a CSDN personal homepage

#-*- coding:utf-8 -*-
import urllib.request
from lxml import etree


class CrawlJs():
    # Fetch the raw HTML of the given URL
    def getArticle(self, url):
        print('█████████████◣开始爬取数据')
        my_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36',
        }
        request = urllib.request.Request(url, headers=my_headers)
        content = urllib.request.urlopen(request).read()
        return content

    # Pick out the article titles and links, then append them to a text file
    def save(self, content):
        xml = etree.HTML(content)
        # text()[2]: the title is the second text node inside the <a>; the first is
        # just the whitespace around the article-type badge
        title = xml.xpath('//div[@class="article-list"]/div/h4/a/text()[2]')
        link = xml.xpath('//div[@class="article-list"]/div/h4/a/@href')
        if not title:   # nothing matched on this page, skip it
            return
        # print(map(lambda x, y: [x, y], title, link))
        for t, li in zip(title, link):
            print(t + li)
            with open('csdn.txt', 'a+') as f:
                f.write(t.strip() + '  ' + li + '\n')
        print('█████████████◣爬取完成!')


# Main program entry
if __name__ == '__main__':
    page = int(input('请输入你要抓取的页码总数:'))
    for num in range(page):
        # Put your own personal homepage here
        url = 'https://blog.csdn.net/xc_zhou/article/list/%s' % (num + 1)
        js = CrawlJs()
        content = js.getArticle(url)
        js.save(content)
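When crawling many pages in one run it is gentler on the site (and less likely to get you blocked) to pause between requests. A minimal sketch of the main loop with a delay, assuming the CrawlJs class from the block above is already defined; the 2-second interval is an arbitrary choice, not something CSDN requires:

import time

if __name__ == '__main__':
    page = int(input('请输入你要抓取的页码总数:'))
    js = CrawlJs()
    for num in range(page):
        url = 'https://blog.csdn.net/xc_zhou/article/list/%s' % (num + 1)
        js.save(js.getArticle(url))
        time.sleep(2)   # wait a couple of seconds between pages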

(screenshot: resulting table of contents)
