
Baidu title crawler (baidu标题爬虫)


Batch-fetches the titles (along with abstracts and URLs) of Baidu search results.
# -*- coding: utf-8 -*-
"""
Created on Mon Aug 23 15:38:33 2021

Beautiful Soup is a Python library whose main job is extracting data from web pages.
Reference: https://blog.csdn.net/qq_34320337/article/details/104997452
"""
import re
import time

import requests
from bs4 import BeautifulSoup  # parses the fetched page

#ff = open('baocun.txt', 'w')

# Request headers; a real browser User-Agent keeps Baidu from rejecting the request.
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, compress',
    'Accept-Language': 'en-us;q=0.5,en;q=0.3',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0',
}


def getfromBaidu():
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    for k in range(1, 3):        # result pages 1 and 2
        geturl(k)
    end = time.perf_counter()
    print(end - start)


def geturl(k):
    # Baidu paginates with pn=0, 10, 20, ...
    number = str((k - 1) * 10)
    path = ('https://www.baidu.com/s?wd=%E5%92%96%E5%95%A1&pn=' + number +
            '&oq=%E5%92%96%E5%95%A1&ie=utf-8&usm=1&rsv_pq=9ccd7f6500120ebb'
            '&rsv_t=d92fDeHr8TAXzN%2FuqzNW3xd3BcU3lunThKY2lkUUobFc3Ihjx46MPW4iNbc')
    content = requests.get(path, headers=headers)
    # parse the HTML with BeautifulSoup
    soup = BeautifulSoup(content.text, 'html.parser')
    tagh3 = soup.find_all('div', {'class': 'result c-container '})
    for h3 in tagh3:
        try:
            title = h3.find(name='h3', attrs={'class': re.compile('t')}).find('a').text.replace('"', '')
            print(title)
            #ff.write(title + '\n')
        except AttributeError:
            title = ''
        try:
            abstract = h3.find(name='div', attrs={'class': re.compile('c-abstract')}).text.replace('"', '')
            print(abstract)
            #ff.write(abstract + '\n')
        except AttributeError:
            abstract = ''
        try:
            url = h3.find(name='a', attrs={'class': re.compile('c-showurl')}).get('href')
            print(url + '\n')
            #ff.write(url + '\n')
        except AttributeError:
            url = ''
        #ff.write('\n')


if __name__ == '__main__':
    getfromBaidu()
