formatuu posted on 2019-01-31 09:48:28

Reading nginx logs with Filebeat and writing them to Kafka

  Filebeat configuration for writing to Kafka:

filebeat.inputs:
- type: log
  paths:
    - /tmp/access.log                  # nginx access log to tail
  tags: ["nginx-test"]
  fields:
    type: "nginx-test"
    log_topic: "nginxmessages"         # custom field used below to pick the Kafka topic
  fields_under_root: true              # put the custom fields at the top level of the event

processors:
  - drop_fields:
      fields: ["beat", "input", "source", "offset"]   # trim metadata before shipping

name: 10.10.5.119                      # agent name reported with each event

output.kafka:
  enabled: true
  hosts: ["10.78.1.85:9092", "10.78.1.87:9092", "10.78.1.71:9092"]
  topic: "%{[log_topic]}"              # resolves to "nginxmessages" via the custom field above
  partition.round_robin:
    reachable_only: true               # only send to partitions whose leader is reachable
  worker: 2
  required_acks: 1                     # wait for the partition leader to acknowledge
  compression: gzip
  max_message_bytes: 10000000
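
  The point of the topic template is that output.kafka can pick the destination topic per event: because fields_under_root puts log_topic at the top level, each input can name its own topic. A minimal sketch of how that pattern extends to a second log file, assuming a hypothetical error log and topic name that are not in the original post:

filebeat.inputs:
- type: log
  paths:
    - /tmp/access.log
  fields:
    log_topic: "nginxmessages"         # events from this input go to the nginxmessages topic
  fields_under_root: true
- type: log
  paths:
    - /tmp/error.log                   # hypothetical second input, for illustration only
  fields:
    log_topic: "nginxerrors"           # hypothetical topic name, adjust as needed
  fields_under_root: true

output.kafka:
  hosts: ["10.78.1.85:9092", "10.78.1.87:9092", "10.78.1.71:9092"]
  topic: "%{[log_topic]}"              # each event lands in the topic named by its own field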

  Logstash configuration for reading from Kafka:

input {
  kafka {
    bootstrap_servers => "10.78.1.85:9092,10.78.1.87:9092,10.78.1.71:9092"
    topics => ["nginxmessages"]
    codec => "json"          # Filebeat ships JSON, so decode it back into event fields
  }
}
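
  The input block above only pulls the events back out of Kafka; a working pipeline normally adds an output stanza as well. This is only a sketch of a typical continuation, assuming a hypothetical Elasticsearch address that does not appear in the original post:

output {
  elasticsearch {
    hosts => ["http://10.78.1.90:9200"]    # hypothetical ES address, replace with your cluster
    index => "nginx-test-%{+YYYY.MM.dd}"   # daily index named after the nginx-test tag
  }
  stdout { codec => rubydebug }            # optional: print each event while testing
}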

