Apache Kafka 0.10.1.0 Released with a Large Number of Updates
Apache Kafka 0.10.1.0 has been released. This version contains a large number of updates; the main improvements are as follows:
New Features
[*] - Add a throttling option to the Kafka replication tool
[*] - Allow console consumer to consume from particular partitions when new consumer is used.
[*] - support quota based on authenticated user name
[*] - Unify store and downstream caching in streams
[*] - Add functions to print stream topologies
[*] - Queryable state for Kafka Streams
[*] - Change cleanup.policy config to accept a list of valid policies
[*] - Cluster Id
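One of the items above, the cleanup.policy change, means a topic is no longer limited to a single policy: compaction and retention-based deletion can now be combined. A minimal sketch of what such a topic-level config might look like (a config fragment, not taken from the release notes):

```properties
# Hypothetical topic config: apply both log compaction
# and time/size-based deletion to the same topic
cleanup.policy=compact,delete
```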
Improvements
[*] - Allow automatic socket.send.buffer from operating system
[*] - Make log compaction point configurable
[*] - Bound fetch response size
[*] - Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211
[*] - Update outdated dependencies
[*] - ConsumerGroupCommand should tell whether group is actually dead
[*] - Improve message of stop scripts when no processes are running
[*] - Change tools to use new consumer if zookeeper is not specified
[*] - Remove beta from new consumer documentation
[*] - Add new consumer metrics documentation
[*] - Add capability to specify replication compact option for stream store
[*] - Use Boolean protocol type for StopReplicaRequest delete_partitions
[*] - Make Java client classloading more flexible
[*] - Add file descriptor recommendation to ops guide
[*] - Clean-up website documentation when it comes to clients
[*] - Update protocol page on website to explain how KIP-35 should be used
[*] - Allow configuration of MetricsReporter subclasses
[*] - Add an auto accept option to kafka-acls.sh
[*] - Add consumer-property to console tools consumer (similar to --producer-property)
[*] - "BOOSTRAP_SERVERS_DOC" typo in CommonClientConfigs
[*] - Add approximateNumEntries() to the StateStore interface for metrics reporting
[*] - Set broker state as running after publishing to ZooKeeper
[*] - Log.loadSegments() should log the message in exception
[*] - Code style issues in Kafka
[*] - Replace all pattern match on boolean value by if/else block.
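Several of the items above (switching tools to the new consumer when ZooKeeper is not specified, and the `--consumer-property` option for the console consumer) reflect the shift from ZooKeeper-based to broker-based consumer configuration. A minimal sketch of that style of config, using only `java.util.Properties` so it stands alone; the broker address and group name are hypothetical placeholders:

```java
import java.util.Properties;

public class NewConsumerConfig {
    // Build a minimal config for the new (broker-based) consumer.
    // Note: the consumer is pointed at brokers via bootstrap.servers,
    // not at a ZooKeeper ensemble. Values here are placeholders.
    static Properties newConsumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092");
        props.setProperty("group.id", "console-consumer-demo");
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        Properties p = newConsumerProps();
        System.out.println(p.getProperty("bootstrap.servers"));
    }
}
```

Each `key=value` pair here could equally be passed to the console consumer via a `--consumer-property` flag.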
Release notes and full list of changes
Downloads
Source code
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.1.0/kafka-0.10.1.0-src.tgz
Binaries
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.1.0/kafka_2.10-0.10.1.0.tgz
Kafka's purpose is to provide a publish-subscribe solution capable of handling all the activity stream data of a consumer-scale website. This activity (page views, searches, and other user actions) is a key ingredient of many social features on the modern web. Because of throughput requirements, such data has traditionally been handled through log processing and log aggregation. That approach works for log data feeding offline analytics systems like Hadoop, but falls short when real-time processing is also required. Kafka aims to unify online and offline message processing, feeding Hadoop through its parallel loading mechanism while also serving real-time consumption across a cluster of machines.