Chinese web services do not always keep the HTTP Content-Type header consistent with the meta charset declaration. Sina News, Hexun, and NetEase News, for example, all declare meta charset gb2312 in their HTML, yet Sina News's HTTP Content-Type header outputs only Content-Type: text/html, with no charset parameter at all, and NetEase News declares GBK in the HTTP header but GB2312 in the HTML.
Foreign services probing Chinese sites therefore easily end up with garbled text, as I described in 《Yahoo! Pipe的charset问题之解决方法》 (my write-up on fixing Yahoo! Pipes' charset problem).
This raises the question: when the HTTP Content-Type header and the meta charset disagree, whose declaration do you trust?
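Before deciding, it helps to see both declarations side by side. Here is a minimal sketch (Python 2, matching the code later in this post) that pulls the charset out of the HTTP header and out of the raw bytes with plain regular expressions; the NetEase URL is used purely as an example of a page whose two declarations disagree, per the observation above:

import re
from urllib import urlopen

response = urlopen('http://news.163.com/')
html = response.read()

# Charset claimed by the HTTP Content-Type header, if any
m = re.search(r'charset=([\w-]+)', response.headers.get('content-type', ''), re.I)
header_charset = m and m.group(1) or None

# Charset claimed by the meta tag in the raw, undecoded bytes, if any
m = re.search(r'<meta[^>]+charset\s*=\s*["\']?([\w-]+)', html, re.I)
meta_charset = m and m.group(1) or None

print 'header:', header_charset, '| meta:', meta_charset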
You could of course run chardet over the content, but chardet is very resource-hungry: calling it frequently in a web crawler to chew through large volumes of HTML strings will drag down crawl throughput.
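If you do keep chardet around as a fallback, one way to limit the cost (a sketch of a trade-off, not a rule) is to call it only when no declaration is available, and to feed it only a prefix of the page rather than the whole thing, accepting somewhat lower accuracy on short prefixes:

import chardet

def sniff_encoding(html, declared=None):
    # Trust an explicit declaration first; chardet is the last resort
    if declared:
        return declared
    # detect() scans every byte it is given, so a 4 KB prefix keeps it cheap
    guess = chardet.detect(html[:4096])  # e.g. {'encoding': 'GB2312', 'confidence': 0.99}
    return guess['encoding']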
BeautifulSoup's automatic detection mechanism
BeautifulSoup figures out the page encoding automatically, and falls back to chardet when it cannot. Its documentation describes the detection order:
Beautiful Soup tries the following encodings, in order of priority, to turn your document into Unicode:

1. An encoding you pass in as the fromEncoding argument to the soup constructor.
2. An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
3. An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
4. An encoding sniffed by the chardet library, if you have it installed.
5. UTF-8
6. Windows-1252
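Stage 3 of this list is easy to demonstrate: a byte-order mark alone is enough, with no declaration anywhere in the markup. A small sketch against BeautifulSoup 3, whose soup objects expose the detected encoding as originalEncoding:

import codecs
from BeautifulSoup import BeautifulSoup

# A UTF-8 BOM prepended to markup that declares no encoding at all
soup = BeautifulSoup(codecs.BOM_UTF8 + '<html><body>hello</body></html>')
print soup.originalEncoding  # 'utf-8', sniffed from the first three bytes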
In other words, BeautifulSoup trusts the meta charset first (it never sees the HTTP headers at all, since the constructor is handed only the markup string), and that declaration is not necessarily right.
For a pathological example, take http://www.miniclip.com/games/cn/ : its HTTP Content-Type header declares utf-8 while its meta charset says iso-8859-1, and the actual encoding is utf-8.
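You can watch BeautifulSoup make the wrong choice (assuming the page still serves the conflicting declarations described above):

from urllib import urlopen
from BeautifulSoup import BeautifulSoup

html = urlopen('http://www.miniclip.com/games/cn/').read()
soup = BeautifulSoup(html)
# BeautifulSoup never saw the HTTP header, so it believes the meta tag
print soup.originalEncoding  # 'iso-8859-1' -- and the Chinese text comes out garbled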
What do you do in a case like this? You can force BeautifulSoup to convert according to the encoding the HTTP Content-Type header declares:
from BeautifulSoup import BeautifulSoup
from urllib import urlopen

response = urlopen('http://www.miniclip.com/games/cn/')

# Extract the charset parameter from the HTTP Content-Type header
charset = BeautifulSoup.CHARSET_RE.search(response.headers['content-type'])
charset = charset and charset.group(3) or None

# Pass it as fromEncoding so it takes priority over the meta tag
page = BeautifulSoup(response.read(), fromEncoding=charset)
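CHARSET_RE here is the same regular expression BeautifulSoup 3 uses internally to pull the charset parameter out of a Content-Type value; its third capture group is the charset name itself, which is why the code reads charset.group(3). With fromEncoding set, the header's utf-8 is tried first and actually works, so per the priority rules above the bogus iso-8859-1 in the meta tag is ignored and the page decodes correctly.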