mahonglin123456 posted on 2015-04-22 10:07:24

Python web scraping with urllib, urllib2, httplib [1]

Python web scraping with urllib, urllib2, httplib

Category: Python Notes | 2012-03-17 16:02
A while back I worked with FTP and wrote a small tool script: http://blog.iyunv.com/wklken/article/details/7059423
Recently I needed to scrape web pages, so I looked into the ways Python can do that.

Requirement:
Fetch a web page and parse out its content.

Libraries involved (urllib2 is the main one):
urllib    http://docs.python.org/library/urllib.html
urllib2   http://docs.python.org/library/urllib2.html
httplib   http://docs.python.org/library/httplib.html
  
  
Using urllib:
1. Fetch page information

urllib.urlopen(url[, data[, proxies]]):
url: the address of the remote resource
data: data to submit to the URL via POST
proxies: proxy settings

The object returned by urlopen provides:
- read(), readline(), readlines(), fileno(), close(): used exactly like the corresponding file-object methods
- info(): returns an httplib.HTTPMessage object holding the headers sent back by the remote server
- getcode(): returns the HTTP status code; for an HTTP request, 200 means the request completed successfully and 404 means the page was not found
- geturl(): returns the URL that was requested
  
Usage:



#!/usr/bin/python
# -*- coding:utf-8 -*-
# urllib_test.py
# author: wklken
# 2012-03-17 wklken#yeah.net

import os
import urllib

url = "http://www.siteurl.com"

def use_urllib():
    import urllib, httplib
    httplib.HTTPConnection.debuglevel = 1
    page = urllib.urlopen(url)
    print "status:", page.getcode()  # 200 = request succeeded, 404 = not found
    print "url:", page.geturl()
    print "head_info:\n", page.info()
    print "Content len:", len(page.read())
  
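The example above only exercises the plain GET path; urlopen's data and proxies parameters are not shown. A minimal hedged sketch (the search path and proxy address below are placeholders, not from the original post):

import urllib

post_data = urllib.urlencode({"q": "python"})
page = urllib.urlopen("http://www.siteurl.com/search", post_data)  # passing data turns this into a POST
print page.getcode()

proxies = {"http": "http://127.0.0.1:8080"}  # hypothetical proxy address
page = urllib.urlopen("http://www.siteurl.com", proxies=proxies)
print len(page.read())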
Other helper methods (mainly URL encoding/decoding):
- urllib.quote(string[, safe]): percent-encode a string; the safe parameter lists characters that should not be encoded
- urllib.unquote(string): decode a percent-encoded string
- urllib.quote_plus(string[, safe]): like urllib.quote, but replaces ' ' with '+' where quote uses '%20'
- urllib.unquote_plus(string): decode a string encoded with quote_plus

- urllib.urlencode(query[, doseq]): convert a dict, or a list of two-element tuples, into a URL query string; for example the dict {'name': 'wklken', 'pwd': '123'} becomes "name=wklken&pwd=123"
- urllib.pathname2url(path): convert a local path into a URL path
- urllib.url2pathname(path): convert a URL path into a local path
  
Usage:



def urllib_other_functions():
    astr = urllib.quote('this is "K"')
    print astr
    print urllib.unquote(astr)

    bstr = urllib.quote_plus('this is "K"')
    print bstr
    print urllib.unquote_plus(bstr)

    params = {"a": "1", "b": "2"}
    print urllib.urlencode(params)

    l2u = urllib.pathname2url(r'd:\a\test.py')
    print l2u
    print urllib.url2pathname(l2u)
  
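A small hedged example of the doseq flag mentioned above: when a value is itself a sequence, doseq=1 expands it into repeated key=value pairs instead of percent-encoding the Python literal (the outputs in the comments are approximate, and dict key order may vary):

import urllib

params = {"name": "wklken", "tags": ["a", "b"]}
print urllib.urlencode(params)           # e.g. name=wklken&tags=%5B%27a%27%2C+%27b%27%5D
print urllib.urlencode(params, doseq=1)  # e.g. name=wklken&tags=a&tags=b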
2. Download remote data

urlretrieve downloads remote data directly to a local file.
urllib.urlretrieve(url[, filename[, reporthook[, data]]]):

filename: the local path to save to (if omitted, urllib writes the data to a temporary file)
reporthook: a callback invoked when the connection is established and after each data block has been transferred
data: data to POST to the server

The method returns a two-element tuple (filename, headers): filename is the local path the data was saved to, and headers is the server's response headers.



def callback_f(downloaded_size, block_size, remote_total_size):
    per = 100.0 * downloaded_size * block_size / remote_total_size
    if per > 100:
        per = 100
    print "%.2f%%" % per

def use_urllib_retrieve():
    import urllib
    local = os.path.join(os.path.abspath("./"), "a.html")
    print local
    urllib.urlretrieve(url, local, callback_f)
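A brief hedged follow-up to the example above: urlretrieve's (filename, headers) return value, which the callback example does not use (the URL is the same placeholder as before):

import urllib

filename, headers = urllib.urlretrieve("http://www.siteurl.com", "a.html")
print filename                           # local path the data was written to
print headers.getheader("Content-Type")  # for HTTP URLs, headers is an httplib.HTTPMessage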
Next: httplib
Please credit the source when reposting: http://blog.iyunv.com/wklken
  
Python web scraping with urllib, urllib2, httplib

Category: Python Notes | 2012-03-17 16:09
Previous: scraping with urllib (Python web scraping with urllib, urllib2, httplib)


Scraping with httplib:
An HTTPConnection represents one interaction with the server, i.e. one request/response exchange.
httplib.HTTPConnection(host[, port[, strict[, timeout]]])
host: the server host
port: the port number, 80 by default
strict: defaults to False and controls whether a BadStatusLine exception is raised when the status line returned by the server (a typical status line looks like: HTTP/1.0 200 OK) cannot be parsed
timeout: optional timeout in seconds.

Methods provided by HTTPConnection:
- HTTPConnection.request(method, url[, body[, headers]])
Calling request sends one request to the server.
method: the request method; GET and POST are the most common;
url: the URL of the requested resource;
body: the data submitted to the server, which must be a string (if method is "POST", you can think of body as the data from an HTML form);
headers: the HTTP request headers.
- HTTPConnection.getresponse()
Gets the HTTP response. The returned object is an HTTPResponse instance; HTTPResponse is described below.
- HTTPConnection.connect()
Connects to the HTTP server.
- HTTPConnection.close()
Closes the connection to the server.
- HTTPConnection.set_debuglevel(level)
Sets the debug level. level defaults to 0, meaning no debug output.

httplib.HTTPResponse
HTTPResponse represents the server's response to a client request. It is normally created by calling HTTPConnection.getresponse() and has the following methods and attributes:
- HTTPResponse.read([amt])
Reads the response body. If an ordinary web page was requested, this returns the page's HTML. The optional amt argument reads at most that many bytes from the response stream.
- HTTPResponse.getheader(name[, default])
Gets a response header. name is the header field name; the optional default is returned if that header is not present.
- HTTPResponse.getheaders()
Returns all headers as a list.
- HTTPResponse.msg
All of the response header information.
- HTTPResponse.version
The HTTP protocol version used by the server: 11 means HTTP/1.1, 10 means HTTP/1.0.
- HTTPResponse.status
The response status code, e.g. 200 for a successful request.
- HTTPResponse.reason
The server's textual description of the result, usually "OK".
  
Example:



#!/usr/bin/python
# -*- coding:utf-8 -*-
# httplib_test.py
# author: wklken
# 2012-03-17 wklken#yeah.net

def use_httplib():
    import httplib
    conn = httplib.HTTPConnection("www.baidu.com")
    i_headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1) Gecko/20090624 Firefox/3.5",
                 "Accept": "text/plain"}
    conn.request("GET", "/", headers=i_headers)
    r1 = conn.getresponse()
    print "version:", r1.version
    print "reason:", r1.reason
    print "status:", r1.status
    print "msg:", r1.msg
    print "headers:", r1.getheaders()
    data = r1.read()
    print len(data)
    conn.close()

if __name__ == "__main__":
    use_httplib()

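The example above only issues a GET. Below is a hedged sketch of a POST using the same HTTPConnection API; the host, path and form fields are placeholders, not from the original post:

import httplib, urllib

def post_with_httplib():
    body = urllib.urlencode({"username": "user", "password": "111111"})
    headers = {"Content-type": "application/x-www-form-urlencoded",
               "Accept": "text/plain"}
    conn = httplib.HTTPConnection("www.example.com")
    conn.request("POST", "/login", body, headers)
    resp = conn.getresponse()
    print resp.status, resp.reason
    print resp.getheader("Content-Type", "unknown")  # default is returned when the header is missing
    conn.close()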

Python web scraping with urllib, urllib2, httplib

Category: Python Notes | 2012-03-17 16:21
  
urllib2 is by far the most powerful of the three.
I tried logging in through a proxy, pulling cookies, following redirects, grabbing images...
Documentation: http://docs.python.org/library/urllib2.html

Straight to the demo code, which covers: plain fetching, using Request (POST/GET), proxies, cookies, and redirect handling.
  



#!/usr/bin/python
# -*- coding:utf-8 -*-
# urllib2_test.py
# author: wklken
# 2012-03-17 wklken@yeah.net

import urllib, urllib2, cookielib, socket

url = "http://www.testurl....."  # change this to your own URL

# Simplest way
def use_urllib2():
    try:
        f = urllib2.urlopen(url, timeout=5).read()
        print len(f)
    except urllib2.URLError, e:
        print e.reason

# Using Request
def get_request():
    # a timeout can be set
    socket.setdefaulttimeout(5)
    # parameters can be added (with no data the request is a GET; the commented-out form below is a POST)
    params = {"wd": "a", "b": "2"}
    # request headers can be added so the client identifies itself
    i_headers = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1) Gecko/20090624 Firefox/3.5",
                 "Accept": "text/plain"}
    # use POST, submitting params to the server; if not supported, an exception is thrown
    #req = urllib2.Request(url, data=urllib.urlencode(params), headers=i_headers)
    req = urllib2.Request(url, headers=i_headers)

    # after creating the Request, more headers can still be added; if a key repeats, the later value wins
    #req.add_header('Accept', 'application/json')
    # the HTTP method can also be overridden
    #req.get_method = lambda: 'PUT'
    try:
        page = urllib2.urlopen(req)
        print len(page.read())
        # GET-style equivalent
        #url_params = urllib.urlencode({"a": "1", "b": "2"})
        #final_url = url + "?" + url_params
        #print final_url
        #data = urllib2.urlopen(final_url).read()
        #print "Method:get ", len(data)
    except urllib2.HTTPError, e:
        print "Error Code:", e.code
    except urllib2.URLError, e:
        print "Error Reason:", e.reason

def use_proxy():
    enable_proxy = False
    proxy_handler = urllib2.ProxyHandler({"http": "http://proxyurlXXXX.com:8080"})
    null_proxy_handler = urllib2.ProxyHandler({})
    if enable_proxy:
        opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
    else:
        opener = urllib2.build_opener(null_proxy_handler, urllib2.HTTPHandler)
    # install the opener as urllib2's global opener
    urllib2.install_opener(opener)
    content = urllib2.urlopen(url).read()
    print "proxy len:", len(content)

class NoExceptionCookieProcesser(urllib2.HTTPCookieProcessor):
    def http_error_403(self, req, fp, code, msg, hdrs):
        return fp
    def http_error_400(self, req, fp, code, msg, hdrs):
        return fp
    def http_error_500(self, req, fp, code, msg, hdrs):
        return fp

def hand_cookie():
    cookie = cookielib.CookieJar()
    #cookie_handler = urllib2.HTTPCookieProcessor(cookie)
    # the processor below additionally swallows common error codes
    cookie_handler = NoExceptionCookieProcesser(cookie)
    opener = urllib2.build_opener(cookie_handler, urllib2.HTTPHandler)
    url_login = "https://www.yourwebsite/?login"
    params = {"username": "user", "password": "111111"}
    opener.open(url_login, urllib.urlencode(params))
    for item in cookie:
        print item.name, item.value
    #urllib2.install_opener(opener)
    #content = urllib2.urlopen(url).read()
    #print len(content)

# get the URL of the final page after N redirects
def get_request_direct():
    import httplib
    httplib.HTTPConnection.debuglevel = 1
    request = urllib2.Request("http://www.google.com")
    request.add_header("Accept", "text/html,*/*")
    request.add_header("Connection", "Keep-Alive")
    opener = urllib2.build_opener()
    f = opener.open(request)
    print f.url
    print f.headers.dict
    print len(f.read())

if __name__ == "__main__":
    use_urllib2()
    get_request()
    get_request_direct()
    use_proxy()
    hand_cookie()
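get_request_direct above only reports the final URL (f.url). If you also want to count how many redirects were followed, one hedged option is to subclass urllib2.HTTPRedirectHandler; the class and function names below are ours, not part of the original post:

import urllib2

class CountingRedirectHandler(urllib2.HTTPRedirectHandler):
    def __init__(self):
        self.redirect_count = 0
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # delegate to the stock handler, just keeping a tally
        self.redirect_count += 1
        return urllib2.HTTPRedirectHandler.redirect_request(
            self, req, fp, code, msg, headers, newurl)

def count_redirects(target_url):
    handler = CountingRedirectHandler()
    opener = urllib2.build_opener(handler)
    f = opener.open(target_url)
    print f.url, "redirects:", handler.redirect_count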

Python urllib2: recursively scraping the images on a site

Category: Python Notes | 2012-03-17 19:51
  
Requirement:
Scrape the images under a given site.
Configurable: the directory images are saved to, a minimum image size threshold, the crawl depth, and whether to follow links to external sites; then fetch and download the images.

Libraries used:
urllib    http://docs.python.org/library/urllib.html   [downloading]
urllib2   http://docs.python.org/library/urllib2.html  [fetching]
urlparse  http://docs.python.org/library/urlparse.html [URL splitting]
sgmllib   http://docs.python.org/library/sgmllib.html  [HTML parsing]

Code:



#!/usr/bin/python
# -*- coding:utf-8 -*-
# author: wklken
# 2012-03-17 wklken@yeah.net
# 1. URL parsing  2. image downloading  3. refactoring
# 4. multithreading: not added yet

import os, sys, urllib, urllib2, urlparse
from sgmllib import SGMLParser

img = []

class URLLister(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.urls = []
        self.imgs = []
    def start_a(self, attrs):
        href = [v for k, v in attrs if k == "href" and v.startswith("http")]
        if href:
            self.urls.extend(href)
    def start_img(self, attrs):
        src = [v for k, v in attrs if k == "src" and v.startswith("http")]
        if src:
            self.imgs.extend(src)

def get_url_of_page(url, if_img=False):
    urls = []
    try:
        f = urllib2.urlopen(url, timeout=1).read()
        url_listen = URLLister()
        url_listen.feed(f)
        if if_img:
            urls.extend(url_listen.imgs)
        else:
            urls.extend(url_listen.urls)
    except urllib2.URLError, e:
        print e.reason
    return urls

# process pages recursively
def get_page_html(begin_url, depth, ignore_outer, main_site_domain):
    # if external sites are excluded, filter them out
    if ignore_outer:
        if main_site_domain not in begin_url:
            return

    if depth == 1:
        urls = get_url_of_page(begin_url, True)
        img.extend(urls)
    else:
        urls = get_url_of_page(begin_url)
        if urls:
            for url in urls:
                get_page_html(url, depth - 1, ignore_outer, main_site_domain)

# download the images
def download_img(save_path, min_size):
    print "download begin..."
    for im in img:
        filename = im.split("/")[-1]
        dist = os.path.join(save_path, filename)
        # checking the size this way is wasteful: the whole image is downloaded just to measure it
        #if len(urllib2.urlopen(im).read()) < min_size:
        #    continue
        # fetching the headers first is much better: no need to download the image twice
        connection = urllib2.build_opener().open(urllib2.Request(im))
        if int(connection.headers.dict['content-length']) < min_size:
            continue
        urllib.urlretrieve(im, dist, None)
        print "Done: ", filename
    print "download end..."

if __name__ == "__main__":
    # first page to scrape images from
    url = "http://www.baidu.com/"
    # directory to save images to
    save_path = os.path.abspath("./download")
    if not os.path.exists(save_path):
        os.mkdir(save_path)
    # minimum image size threshold, in bytes
    min_size = 92
    # crawl depth
    max_depth = 1
    # whether to stay on the target site, i.e. ignore links to external sites
    ignore_outer = True
    main_site_domain = urlparse.urlsplit(url).netloc

    get_page_html(url, max_depth, ignore_outer, main_site_domain)

    download_img(save_path, min_size)
  
  
  
Possible follow-up improvements:
1. Use multiple threads to speed up downloading; the multi-level crawl is currently too slow (a rough sketch follows below)
2. Write a version based on BeautifulSoup
3. Add a GUI...
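A rough, hedged sketch of point 1, assuming a plain Queue feeding a fixed pool of worker threads; the worker count is arbitrary and error handling is omitted:

import os, urllib, threading, Queue

def download_worker(q, save_path):
    # keep pulling image URLs until the queue is drained
    while True:
        try:
            im = q.get_nowait()
        except Queue.Empty:
            return
        filename = im.split("/")[-1]
        urllib.urlretrieve(im, os.path.join(save_path, filename))
        print "Done:", filename

def download_img_threaded(img_urls, save_path, workers=4):
    q = Queue.Queue()
    for im in img_urls:
        q.put(im)
    threads = [threading.Thread(target=download_worker, args=(q, save_path))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()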
  
  
  2012-03-17
  wklken
  
Please credit the source when reposting: http://blog.iyunv.com/wklken