
[Experience Share] ELK + Kafka + Filebeat + Kibana

Architecture Overview

app-server(filebeat) -> kafka -> logstash -> elasticsearch -> kibana

Server Roles

Base System Environment
# cat /etc/redhat-release
CentOS release 6.5 (Final)

# uname -r
2.6.32-431.el6.x86_64


192.168.162.51    logstash01
192.168.162.53    logstash02
192.168.162.55    logstash03
192.168.162.56    logstash04
192.168.162.57    logstash05

192.168.162.58    elasticsearch01
192.168.162.61    elasticsearch02
192.168.162.62    elasticsearch03
192.168.162.63    elasticsearch04
192.168.162.64    elasticsearch05

192.168.162.66    kibana
192.168.128.144   kafka01
192.168.128.145   kafka02
192.168.128.146   kafka03

192.168.138.75    filebeat,weblogic

Download the Required Packages

Elastic download page (the 6.0.0-beta2 builds are used here)

Kafka download page

Java download page

elasticsearch-6.0.0-beta2.rpm
filebeat-6.0.0-beta2-x86_64.rpm
grafana-4.4.3-1.x86_64.rpm
heartbeat-6.0.0-beta2-x86_64.rpm
influxdb-1.3.5.x86_64.rpm
jdk-8u144-linux-x64.rpm
kafka_2.12-0.11.0.0.tgz
kibana-6.0.0-beta2-x86_64.rpm
logstash-6.0.0-beta2.rpm

Deploy and Install Filebeat
Install Filebeat

Install Filebeat on the application server:

# yum localinstall filebeat-6.0.0-beta2-x86_64.rpm -y

After installation, the RPM places Filebeat under:

# ls /usr/share/filebeat/
bin  kibana  module  NOTICE  README.md  scripts

The configuration file is:

/etc/filebeat/filebeat.yml

Configure Filebeat

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data1/logs/apphfpay_8086_domain/apphfpay.yiguanjinrong.yg.*

  multiline.pattern: '^(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#----------------------------- Kafka output ---------------------------------
output.kafka:
  hosts: ['192.168.128.144:9092','192.168.128.145:9092','192.168.128.146:9092']
  topic: 'credit'
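
The multiline settings join continuation lines (for example Java stack traces) onto the event that begins with a timestamp. Before restarting Filebeat it can be worth sanity-checking the pattern with grep -E; this is only a sketch with made-up sample lines, and \d has to be spelled [0-9] because grep uses POSIX classes:

# pattern='^(19|20)[0-9][0-9]-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
# echo '2017-09-11 11:06:03 INFO starting up' | grep -E "$pattern"

The first echo prints its line (it matches, so it would start a new event); this second one prints nothing (no match, so Filebeat would append it to the previous event):

# echo '    at com.example.Foo.bar(Foo.java:42)' | grep -E "$pattern"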

Start Filebeat and check the log for errors:

# /etc/init.d/filebeat start

Log file:

/var/log/filebeat/filebeat
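
A rough way to confirm Filebeat reached the brokers is to grep this log for Kafka-related lines (the exact log wording varies by version, so treat this as a sketch):

# grep -iE 'kafka|error' /var/log/filebeat/filebeat | tail -n 20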

Install and Deploy Kafka and ZooKeeper

Set the hostname on each of the three Kafka servers (run the matching line on each host):

# host=kafka01  && hostname $host && echo "192.168.128.144" $host >>/etc/hosts
# host=kafka02  && hostname $host && echo "192.168.128.145" $host >>/etc/hosts
# host=kafka03  && hostname $host && echo "192.168.128.146" $host >>/etc/hosts

Install Java:

# yum localinstall jdk-8u144-linux-x64.rpm -y

Extract the Kafka tarball and move the extracted directory to /usr/local/kafka:

# tar fzx kafka_2.12-0.11.0.0.tgz
# mv kafka_2.12-0.11.0.0 /usr/local/kafka

Configure Kafka and ZooKeeper

# pwd
/usr/local/kafka/config
# ls
connect-console-sink.properties    connect-log4j.properties       server.properties
connect-console-source.properties  connect-standalone.properties  tools-log4j.properties
connect-distributed.properties     consumer.properties            zookeeper.properties
connect-file-sink.properties       log4j.properties
connect-file-source.properties     producer.properties

Edit the configuration files. Note that broker.id must be unique on each broker and listeners must use that broker's own address; the values below are for kafka01:

# grep -Ev "^$|^#" server.properties

broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://192.168.128.144:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka-logs
num.partitions=12
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000
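
On the other two brokers only broker.id and listeners need to change. A minimal sketch for kafka02, assuming server.properties was copied over from kafka01 (adjust the id and IP the same way for kafka03):

# cd /usr/local/kafka/config
# sed -i 's/^broker.id=.*/broker.id=2/' server.properties
# sed -i 's#^listeners=.*#listeners=PLAINTEXT://192.168.128.145:9092#' server.properties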


# grep -Ev "^$|^#" consumer.properties

zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

# grep -Ev "^$|^#" producer.properties
bootstrap.servers=192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092
compression.type=none

Start ZooKeeper and Kafka

First, run each service in the foreground to check that the configuration is sound.

Start ZooKeeper and watch for errors:

# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties

Then start Kafka and watch for errors:

# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

If neither reports errors, start them in the background, ZooKeeper first and then Kafka (writing an init script also works):

# nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &

# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

Check that both are listening; the default ports are 2181 (ZooKeeper) and 9092 (Kafka).
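
A minimal port check with netstat (installed by default on CentOS 6):

# netstat -lntp | grep -E ':(2181|9092)'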

Create a topic:

# bin/kafka-topics.sh --create --zookeeper zk01.yiguanjinrong.yg:2181 --replication-factor 1 --partitions 1 --topic test

Created topic "test".

List the topics:

# bin/kafka-topics.sh --list --zookeeper zk01.yiguanjinrong.yg:2181

test
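
kafka-topics.sh can also show a topic's partition count, leader, and replica placement:

# bin/kafka-topics.sh --describe --zookeeper zk01.yiguanjinrong.yg:2181 --topic test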

Simulate a producer sending messages:

# bin/kafka-console-producer.sh --broker-list 192.168.128.144:9092 --topic test
Once it starts, type a few lines and press Enter after each.

Simulate a consumer receiving the messages (if they come through, Kafka is working):

# bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic test --from-beginning

Delete the topic (this takes effect because delete.topic.enable=true was set in server.properties):

# bin/kafka-topics.sh --delete --zookeeper zk01.yiguanjinrong.yg:2181 --topic test

Install and Deploy Logstash
Install Logstash

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall logstash-6.0.0-beta2.rpm -y

Logstash's install directory and configuration directory (empty by default) are:

# /usr/share/logstash/   (installation does not add the bin directory to PATH)

# /etc/logstash/conf.d/

Logstash configuration:

# cat /etc/logstash/conf.d/logstash.conf

input {
  kafka {
    bootstrap_servers => "192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092"
    topics => ["credit"]
    group_id => "test-consumer-group"
    codec => "plain"
    consumer_threads => 1
    decorate_events => true

  }
}

output {
  elasticsearch {
    hosts => ["192.168.162.58:9200","192.168.162.61:9200","192.168.162.62:9200","192.168.162.63:9200","192.168.162.64:9200"]
    index => "logs-%{+YYYY.MM.dd}"
    workers => 1
  }
}
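
Before starting Logstash it is worth confirming that Filebeat events are actually arriving on the credit topic; a sketch reusing the console consumer from the Kafka section:

# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic credit --from-beginning --max-messages 5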

Check that the configuration file is valid:

# /usr/share/logstash/bin/logstash -t --path.settings /etc/logstash/  --verbose
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Logstash does not ship with an init script by default, but it bundles a tool (system-install) to generate one.

View the tool's usage help:

# bin/system-install --help

Usage: system-install [OPTIONSFILE] [STARTUPTYPE] [VERSION]

NOTE: These arguments are ordered, and co-dependent

OPTIONSFILE: Full path to a startup.options file
OPTIONSFILE is required if STARTUPTYPE is specified, but otherwise looks first
in /usr/share/logstash/config/startup.options and then /etc/logstash/startup.options
Last match wins

STARTUPTYPE: e.g. sysv, upstart, systemd, etc.
OPTIONSFILE is required to specify a STARTUPTYPE.

VERSION: The specified version of STARTUPTYPE to use.  The default is usually
preferred here, so it can safely be omitted.
Both OPTIONSFILE & STARTUPTYPE are required to specify a VERSION.

# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

This creates /etc/init.d/logstash. Be sure to point the script's log paths at a proper directory; /var/log/logstash is recommended:

# mkdir -p /var/log/logstash && chown logstash.logstash -R /var/log/logstash

The part of the script that needs modification is shown below:

start() {

  # Ensure the log directory is setup correctly.
  if [ ! -d "/var/log/logstash" ]; then
    mkdir "/var/log/logstash"
    chown "$user":"$group" -R "/var/log/logstash"
    chmod 755 "/var/log/logstash"
  fi


  # Setup any environmental stuff beforehand
  ulimit -n ${limit_open_files}

  # Run the program!
  nice -n "$nice" \
  chroot --userspec "$user":"$group" "$chroot" sh -c "
    ulimit -n ${limit_open_files}
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/logstash/logstash-stdout.log 2>> /var/log/logstash/logstash-stderr.log &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}

Start Logstash and check the logs for errors:

# /etc/init.d/logstash start
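
With the init script above, stdout and stderr land in the files it redirects to:

# tail -f /var/log/logstash/logstash-stdout.log /var/log/logstash/logstash-stderr.log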

Install and Deploy the Elasticsearch Cluster
Install Elasticsearch

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall elasticsearch-6.0.0-beta2.rpm -y

Configure Elasticsearch

Install path:

# /usr/share/elasticsearch/

Configuration file:

# /etc/elasticsearch/elasticsearch.yml

Elasticsearch configuration:

# cat elasticsearch.yml | grep -Ev "^$|^#"

cluster.name: elasticsearch
node.name: es01  #change to the matching node name on each node
path.data: /data1/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.162.58 #change to the node's own address on each node
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.162.58", "192.168.162.61", "192.168.162.62", "192.168.162.63", "192.168.162.64"]
discovery.zen.minimum_master_nodes: 3 #quorum for 5 master-eligible nodes: 5/2+1 = 3, guards against split brain
node.master: true
node.data: true
transport.tcp.compress: true

Start Elasticsearch

# mkdir -p /var/log/elasticsearch && chown elasticsearch.elasticsearch -R /var/log/elasticsearch

The data directory from path.data needs the same ownership:

# mkdir -p /data1/elasticsearch && chown elasticsearch.elasticsearch -R /data1/elasticsearch

# /etc/init.d/elasticsearch start
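
Once all five nodes are up, confirm they joined the cluster (run against any node):

# curl -s 'http://192.168.162.58:9200/_cat/nodes?v'
# curl -s 'http://192.168.162.58:9200/_cluster/health?pretty'

The health endpoint should report 5 nodes and status green once the cluster has settled.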

Install and Deploy Kibana
Install Kibana

# yum localinstall kibana-6.0.0-beta2-x86_64.rpm -y

Kibana configuration:

# cat /etc/kibana/kibana.yml | grep -Ev "^$|^#"

server.port: 5601
server.host: "192.168.162.66"
elasticsearch.url: "http://192.168.162.58:9200"  #any one node of the Elasticsearch cluster will do
kibana.index: ".kibana"
pid.file: /var/run/kibana/kibana.pid

Start Kibana

Create the PID directory referenced by pid.file:

# mkdir -p /var/run/kibana
# chown kibana.kibana -R /var/run/kibana

Then modify the start() section of the Kibana init script:

start() {

  # Ensure the log directory is setup correctly.
  [ ! -d "/var/log/kibana/" ] && mkdir "/var/log/kibana/"
  chown "$user":"$group"  "/var/log/kibana/"
  chmod 755 "/var/log/kibana/"


  # Setup any environmental stuff beforehand


  # Run the program!

  chroot --userspec "$user":"$group" "$chroot" sh -c "

    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/kibana/kibana.stdout 2>> /var/log/kibana/kibana.stderr &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}

Start it:

# /etc/init.d/kibana start

Kibana is now reachable at http://192.168.162.66:5601
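
With every stage running, a quick end-to-end smoke test (a sketch using the hosts above): publish a test line to the credit topic and check that a document lands in today's index, which follows the logs-%{+YYYY.MM.dd} pattern from the Logstash output:

# echo 'pipeline smoke test' | /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.128.144:9092 --topic credit

# curl -s "http://192.168.162.58:9200/logs-$(date +%Y.%m.%d)/_count?pretty"

Once the count is non-zero, create an index pattern of logs-* in Kibana's Management page to start browsing the logs.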
