Full-Text Search Engines
1. Sphinx
1.1. What is Sphinx
Sphinx is a full-text search engine developed by the Russian developer Andrew Aksyonoff. It aims to give other applications fast, low-footprint, high-relevance full-text search, and it integrates easily with SQL databases and scripting languages. It ships with built-in support for MySQL and PostgreSQL data sources, and can also read XML data in a specific format from standard input. By modifying the source code, users can add new data sources (for example, native support for other kinds of DBMS).
Official APIs for PHP, Python, Java, Ruby, and pure C are included in the Sphinx distribution.
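For example, a minimal sphinx.conf data source and index for a MySQL-backed application might look like this (all table, database, and path names here are hypothetical):

```ini
source products
{
    type      = mysql
    sql_host  = localhost
    sql_user  = app
    sql_pass  = secret
    sql_db    = shop
    # The document id must come first, followed by the indexed fields.
    sql_query = SELECT id, title, body FROM products
}

index products
{
    source = products
    path   = /var/data/sphinx/products
}
```

The indexer reads rows via sql_query and writes the index files under path; the search daemon then serves queries against that index.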
1.2. Sphinx Features
Fast indexing (peak performance up to 10 MB/sec on modern CPUs);
High-performance search (average query response time under 0.1 seconds on 2–4 GB of text data);
Scales to large datasets (known to handle over 100 GB of text, or 100 million documents on a single-CPU system);
2. Xapian
Xapian is an Open Source Search Engine Library, released under the GPL. It's written in C++, with bindings to allow use from Perl, Python, PHP, Java, Tcl, C# and Ruby (so far!).
Xapian is a highly adaptable toolkit which allows developers to easily add advanced indexing and search facilities to their own applications. It supports the Probabilistic Information Retrieval model and also supports a rich set of boolean query operators.
Web Crawlers
1. Scrapy
1.1. What is Scrapy
Scrapy is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
1.2. Scrapy Features
Simple
Scrapy was designed with simplicity in mind, by providing the features you need without getting in your way
Productive
Just write the rules to extract the data from web pages and let Scrapy crawl the entire web site for you
Fast
Scrapy is used in production crawlers to completely scrape more than 500 retailer sites daily, all on one server
Extensible
Scrapy was designed with extensibility in mind, so it provides several mechanisms to plug in new code without having to touch the framework core
Portable
Scrapy runs on Linux, Windows, Mac and BSD
Open Source and 100% Python
Scrapy is completely written in Python, which makes it very easy to hack
Well-tested
Scrapy has an extensive test suite with very good code coverage
HTML Processing
1. Beautiful Soup
Beautiful Soup is an HTML/XML parser written in Python that copes well with malformed markup and builds a parse tree. It provides simple, commonly needed operations for navigating, searching, and modifying the parse tree, which can save you a lot of programming time. For Ruby, use Rubyful Soup.
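For instance, it recovers cleanly from an unclosed tag (this uses the modern bs4 package; older Beautiful Soup 3 releases use a different import path):

```python
from bs4 import BeautifulSoup

# Deliberately sloppy markup: the second <p> is never closed.
html = "<html><p class='title'><b>Demo</b></p><p>unclosed paragraph"
soup = BeautifulSoup(html, "html.parser")

print(soup.find("p", class_="title").get_text())  # Demo
print(soup.find_all("p")[1].get_text())           # unclosed paragraph
```

Despite the broken input, the parse tree contains both paragraphs and can be navigated and searched normally.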
Interacting with Web Sites
1. mechanize
Stateful programmatic web browsing in Python, after Andy Lester’s Perl module
mechanize.Browser and mechanize.UserAgentBase implement the interface of urllib2.OpenerDirector, so:
any URL can be opened, not just http:
mechanize.UserAgentBase offers easy dynamic configuration of user-agent features like protocol, cookie, redirection and robots.txt handling, without having to make a new OpenerDirector each time, e.g. by calling build_opener().
Easy HTML form filling.
Convenient link parsing and following.
Browser history (.back() and .reload() methods).
The Referer HTTP header is added properly (optional).