[Experience Sharing] Kafka Configuration

Download Kafka and extract it:
tar xzf kafka_2.8.0-0.8.1.1.tgz

First, start the ZooKeeper service:
./bin/zookeeper-server-start.sh config/zookeeper.properties &

Then start a Kafka broker:
./bin/kafka-server-start.sh  config/server.properties &

Start a console producer:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Start a console consumer:
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
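To replay everything already in the log instead of only new messages, the console consumer also accepts a --from-beginning flag:
./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning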

List all topics:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
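If automatic topic creation is disabled, a topic can be created explicitly before producing to it, e.g. with one partition and one replica:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test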

API (note: the examples below are from the legacy Kafka 0.7-era documentation; classes such as ProducerData and the zk.connect/broker.list settings were replaced in the 0.8 producer rewrite):

// Here are examples of using the producer API - kafka.producer.Producer<T> -
// First, start a local instance of the zookeeper server
./bin/zookeeper-server-start.sh config/zookeeper.properties
// Next, start a kafka broker
./bin/kafka-server-start.sh config/server.properties
// Now, create the producer with all configuration defaults and use zookeeper based broker discovery.
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;
...
Properties props = new Properties();
props.put("zk.connect", "127.0.0.1:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);
// Send a single message
// The message is sent to a randomly selected partition registered in ZK
ProducerData<String, String> data = new ProducerData<String, String>("test-topic", "test-message");
producer.send(data);
//--------------Send multiple messages to multiple topics in one request---------------------
List<String> messages = new java.util.ArrayList<String>();
messages.add("test-message1");
messages.add("test-message2");
ProducerData<String, String> data1 = new ProducerData<String, String>("test-topic1", messages);
ProducerData<String, String> data2 = new ProducerData<String, String>("test-topic2", messages);
List<ProducerData<String, String>> dataForMultipleTopics = new ArrayList<ProducerData<String, String>>();
dataForMultipleTopics.add(data1);
dataForMultipleTopics.add(data2);
producer.send(dataForMultipleTopics);
//------------Send a message with a partition key. Messages with the same key are sent to the same partition-------------
ProducerData<String, String> data = new ProducerData<String, String>("test-topic", "test-key", "test-message");
producer.send(data);
//-------------Use your custom partitioner--------------------
//If you are using zookeeper based broker discovery, kafka.producer.Producer<T> routes your data to a particular broker partition based on a kafka.producer.Partitioner<T>, specified through the partitioner.class config parameter. It defaults to kafka.producer.DefaultPartitioner. If you don't supply a partition key, then it sends each request to a random broker partition.
// (Scala) an example custom partitioner
class MemberIdPartitioner extends Partitioner[MemberIdLocation] {
  def partition(data: MemberIdLocation, numPartitions: Int): Int = {
    (data.location.hashCode % numPartitions)
  }
}
// create the producer config to plug in the above partitioner
Properties props = new Properties();
props.put("zk.connect", "127.0.0.1:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("partitioner.class", "xyz.MemberIdPartitioner");
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);

//-----------------Use custom Encoder-----------------------
// The producer takes in a required config parameter serializer.class that specifies an Encoder<T> to convert T to a Kafka Message. Default is the no-op kafka.serializer.DefaultEncoder. Here is an example of a custom Encoder -
// (Scala) an example custom Encoder
class TrackingDataSerializer extends Encoder[TrackingData] {
  // Say you want to use your own custom Avro encoding
  val avroEncoder = new CustomAvroEncoder()
  def toMessage(event: TrackingData): Message = {
    new Message(avroEncoder.getBytes(event))
  }
}
// If you want to use the above Encoder, pass it in to the "serializer.class" config parameter
Properties props = new Properties();
props.put("serializer.class", "xyz.TrackingDataSerializer");
// Using static list of brokers, instead of zookeeper based broker discovery
// Some applications would rather not depend on zookeeper. In that case, the config parameter broker.list can be used to specify the list of all brokers in the Kafka cluster, in the following format: broker_id1:host1:port1,broker_id2:host2:port2...
// you can stop the zookeeper instance as it is no longer required
./bin/zookeeper-server-stop.sh
// create the producer config object
Properties props = new Properties();
props.put("broker.list", "0:localhost:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig config = new ProducerConfig(props);
// send a message using default partitioner
Producer<String, String> producer = new Producer<String, String>(config);
List<String> messages = new java.util.ArrayList<String>();
messages.add("test-message");
ProducerData<String, String> data = new ProducerData<String, String>("test-topic", messages);
producer.send(data);
//-------------------Use the asynchronous producer along with GZIP compression. This buffers writes in memory until either batch.size or queue.time is reached. After that, data is sent to the Kafka brokers--------------------
Properties props = new Properties();
props.put("zk.connect", "127.0.0.1:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("producer.type", "async");
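// compression.codec selects the compression algorithm; in these legacy numeric values, 0 = none and 1 = GZIP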
props.put("compression.codec", "1");
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);
ProducerData<String, String> data = new ProducerData<String, String>("test-topic", "test-message");
producer.send(data);
// Finally, the producer should be closed, through
producer.close();
//----------------Log4j appender-------------------
// Data can also be produced to a Kafka server in the form of a log4j appender. In this way, minimal code needs to be written in order to send some data across to the Kafka server. Here is an example of how to use the Kafka Log4j appender - Start by defining the Kafka appender in your log4j.properties file.
# define the kafka log4j appender config parameters
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
# REQUIRED: set the hostname of the kafka server
log4j.appender.KAFKA.Host=localhost
# REQUIRED: set the port on which the Kafka server is listening for connections
log4j.appender.KAFKA.Port=9092
# REQUIRED: the topic under which the logger messages are to be posted
log4j.appender.KAFKA.Topic=test
# the serializer to be used to turn an object into a Kafka message. Defaults to kafka.producer.DefaultStringEncoder
log4j.appender.KAFKA.Serializer=kafka.test.AppenderStringSerializer
# do not set the above KAFKA appender as the root appender
log4j.rootLogger=INFO
# set the logger for your package to be the KAFKA appender
log4j.logger.your.test.package=INFO, KAFKA
Data can be sent using a log4j appender as follows -
Logger logger = Logger.getLogger([your.test.class]);
logger.info("message from log4j appender");
//If your log4j appender fails to send messages, please verify that the correct log4j properties file is being used. You can add -Dlog4j.debug=true to your VM parameters to verify this.

//-------------------Consumer Code-----------------------------
// The consumer code is slightly more complex as it enables multithreaded consumption:
// specify some consumer properties
Properties props = new Properties();
props.put("zk.connect", "localhost:2181");
props.put("zk.connectiontimeout.ms", "1000000");
props.put("groupid", "test_group");
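// note: these are the legacy 0.7 property names; in Kafka 0.8 they were renamed to
// "zookeeper.connect", "zookeeper.connection.timeout.ms" and "group.id"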
// Create the connection to the cluster
ConsumerConfig consumerConfig = new ConsumerConfig(props);
ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);
// create 4 partitions of the stream for topic "test", to allow 4 threads to consume
// (ImmutableMap is from Google Guava)
Map<String, List<KafkaStream<Message>>> topicMessageStreams =
    consumerConnector.createMessageStreams(ImmutableMap.of("test", 4));
List<KafkaStream<Message>> streams = topicMessageStreams.get("test");
// create a thread pool with 4 threads, one per stream
ExecutorService executor = Executors.newFixedThreadPool(4);
// consume the messages in the threads
for (final KafkaStream<Message> stream : streams) {
    executor.submit(new Runnable() {
        public void run() {
            for (MessageAndMetadata msgAndMetadata : stream) {
                // process message (msgAndMetadata.message())
            }
        }
    });
}
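// When consumption is finished, both the thread pool and the connector should be
// shut down. A minimal sketch (the 60-second timeout is an arbitrary choice):
executor.shutdown();
executor.awaitTermination(60, TimeUnit.SECONDS); // needs java.util.concurrent.TimeUnit
consumerConnector.shutdown(); // closes the streams and the ZooKeeper connection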
//------------------------Hadoop Consumer---------------------
// Providing a horizontally scalable solution for aggregating and loading data into Hadoop was one of our basic use cases. To support this use case, we provide a Hadoop-based consumer which spawns off many map tasks to pull data from the Kafka cluster in parallel. This provides extremely fast pull-based Hadoop data load capabilities (we were able to fully saturate the network with only a handful of Kafka servers).
// Usage information on the hadoop consumer can be found here.
//---------------------- Simple Consumer----------------------------
// Kafka has a lower-level consumer API for reading message chunks directly from servers. Under most circumstances this should not be needed. But just in case, its usage is as follows:
import kafka.api.FetchRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.Message;
import kafka.message.MessageAndOffset;
import kafka.utils.Utils;
...
// create a consumer to connect to the kafka server running on localhost, port 9092, socket timeout of 10 secs, socket receive buffer of ~1MB
SimpleConsumer consumer = new SimpleConsumer("127.0.0.1", 9092, 10000, 1024000);
long offset = 0;
while (true) {
    // create a fetch request for topic "test", partition 0, current offset, and fetch size of 1MB
    FetchRequest fetchRequest = new FetchRequest("test", 0, offset, 1000000);
    // get the message set from the consumer and print them out
    ByteBufferMessageSet messages = consumer.fetch(fetchRequest);
    for (MessageAndOffset msg : messages) {
        System.out.println("consumed: " + Utils.toString(msg.message.payload(), "UTF-8"));
        // advance the offset after consuming each message
        offset = msg.offset;
    }
}
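// Note: in this legacy API the offset is a byte position in the log, and msg.offset
// already points past the consumed message, which is why it can be assigned
// directly as the next fetch position.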


Source: http://my.oschina.net/ielts0909/blog/100645
