author: JevonWei

Copyright notice: original work

blog: http://119.23.52.191/

Hands-on: an elasticsearch cluster with a filebeat server and a logstash server

Environment

elasticsearch cluster nodes: 172.16.100.120:9200, 172.16.100.121:9200, 172.16.100.122:9200
logstash server: 172.16.100.121
filebeat server: 172.16.100.121
httpd service generating the log data: 172.16.100.121
redis server: 172.16.253.181
kibana server: 172.16.253.181
tomcat server: 172.16.253.145

Network topology (diagram omitted)

The elasticsearch cluster is built as laid out above; those steps are omitted here.
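Before wiring in filebeat and logstash it is worth confirming the cluster is healthy. A minimal check (run from any host that can reach the cluster; the address is simply the first node from the environment list) would be:

curl -XGET 'http://172.16.100.120:9200/_cluster/health?pretty'

A "status" of green means all primary and replica shards are allocated.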

filebeat server
  

Download the filebeat package from https://www.elastic.co/downloads/beats/filebeat
  

  
[iyunv@node4 ~]# ls filebeat-5.5.1-x86_64.rpm
  
filebeat-5.5.1-x86_64.rpm
  

Install filebeat
  

[iyunv@node4 ~]# yum -y install filebeat-5.5.1-x86_64.rpm  
[iyunv@node4 ~]# rpm -ql filebeat
  

Edit filebeat.yml
  

[iyunv@node2 ~]# vim /etc/filebeat/filebeat.yml
- input_type: log
  paths:
    - /var/log/httpd/access_log*     # collect every file whose name starts with access_log

output.logstash:                     # send the data to logstash
  # The Logstash hosts
  hosts: ["172.16.100.121:5044"]     # the logstash server
  

Start the service
  

[iyunv@node4 ~]# systemctl start filebeat   
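As a quick sanity check that filebeat has picked up the prospector, and assuming the RPM's default paths, the service status and the registry file (which records how far each file has been read) can be inspected:

[iyunv@node4 ~]# systemctl status filebeat
[iyunv@node4 ~]# cat /var/lib/filebeat/registry     # offsets for /var/log/httpd/access_log* show up here once events are shipped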

logstash server
Install the Java environment
  

[iyunv@node2 ~]# yum -y install java-1.8.0-openjdk-devel  

Download logstash
  

https://www.elastic.co/downloads/logstash  

Install logstash
  

[iyunv@node2 ~]# ll logstash-5.5.1.rpm  
-rw-r--r--. 1 root root 94158545 Aug 21 17:06 logstash-5.5.1.rpm
  
[iyunv@node4 ~]# rpm -ivh logstash-5.5.1.rpm
  

Edit the logstash configuration files
  

[iyunv@node2 ~]# vim /etc/logstash/logstash.yml
path.data: /var/lib/logstash            # where data is stored
path.config: /etc/logstash/conf.d       # directory the pipeline config files are read from
path.logs: /var/log/logstash            # where logstash writes its own logs

[iyunv@node2 ~]# vim /etc/logstash/jvm.options
-Xms256m                                # initial JVM heap size
-Xmx1g                                  # maximum JVM heap size
  

Edit /etc/logstash/conf.d/test.conf
  

[iyunv@node4 ~]# vim /etc/logstash/conf.d/test.conf
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
    remove_field => "message"   # drop the raw message field once it has been parsed into fields
  }
}

output {
  elasticsearch {
    hosts  => ["http://172.16.100.120:9200","http://172.16.100.121:9200","http://172.16.100.122:9200"]
    index  => "logstash-%{+YYYY.MM.dd}"
    action => "index"
  }
}
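To see what %{COMBINEDAPACHELOG} actually extracts before anything reaches elasticsearch, a throwaway pipeline that reads from stdin and prints events with the rubydebug codec is a useful sketch (the file /tmp/test-grok.conf is an arbitrary name used only here, kept outside conf.d so the service never loads it):

[iyunv@node2 ~]# vim /tmp/test-grok.conf
input  { stdin {} }
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
output { stdout { codec => rubydebug } }

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -f /tmp/test-grok.conf

Paste a line from /var/log/httpd/access_log and the parsed fields (clientip, verb, request, response, agent and so on) are printed to the terminal.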
  

Check the syntax of /etc/logstash/conf.d/test.conf
  

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf  

Run logstash with /etc/logstash/conf.d/test.conf
  

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf

From a client, check that the index has been created
  

[iyunv@node5 ~]# curl -XGET '172.16.100.120:9200/_cat/indices'
green open logstash-2017.10.12 baieaGWpSN-BA28dAlqxhA 5 1 27 0 186.7kb 93.3kb
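To look at a parsed document rather than just the index listing, a simple search against the new index also works (use whatever date _cat/indices reported):

[iyunv@node5 ~]# curl -XGET '172.16.100.120:9200/logstash-2017.10.12/_search?pretty&size=1'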
  

Reading the collected data through the redis input plugin
Redis
  

[iyunv@node4 ~]# yum -y install redis  
[iyunv@node4 ~]# vim /etc/redis.conf
  
bind 0.0.0.0                # listen on all addresses

requirepass danran          # set the password to danran
  
[iyunv@node4 ~]# systemctl restart redis
  

Test that the redis connection works
  

[iyunv@node4 ~]# redis-cli -h 172.16.253.181 -a danran  
172.16.253.181:6379>
  

Configure the logstash server pipeline
  

[iyunv@node2 ~]# vim /etc/logstash/conf.d/redis-input.conf
input {
  redis {
    host      => "172.16.253.181"
    port      => "6379"
    password  => "danran"
    db        => "0"
    data_type => "list"       # read the data as a redis list
    key       => "filebeat"   # must match the key defined in filebeat.yml
  }
}

filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
    remove_field => "message"
  }
  mutate {
    rename => { "clientip" => "[httpd][clientip]" }
  }
}

output {
  elasticsearch {
    hosts  => ["http://172.16.100.120:9200","http://172.16.100.121:9200","http://172.16.100.122:9200"]
    index  => "logstash-%{+YYYY.MM.dd}"
    action => "index"
  }
}
  

Restart the logstash server
  

[iyunv@node2 ~]# systemctl restart logstash  

Point filebeat's output at redis
  

[iyunv@node2 ~]# vim /etc/filebeat/filebeat.yml
- input_type: log
  paths:
    - /var/log/httpd/access_log*    # collect every file whose name starts with access_log

#----------------------- redis output ---------------------------
output.redis:
  hosts: ["172.16.253.181"]         # the redis server
  port: "6379"
  password: "danran"                # the redis password
  key: "filebeat"                   # must stay consistent with the key in the input block of redis-input.conf
  db: 0                             # write to database 0
  timeout: 5
  

Restart filebeat
  

[iyunv@node2 ~]# systemctl restart filebeat      

Access the httpd service from a client
  

[iyunv@node1 ~]# curl 172.16.100.121  
test page
  

Log in to redis and check that the data is being collected
  

[iyunv@node4 ~]# redis-cli -h 172.16.253.181 -a danran  
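Inside the redis prompt the queued entries can be inspected with standard redis commands; the key name filebeat is the one set in filebeat.yml above. Note that if logstash is already consuming the list, LLEN may legitimately return 0.

172.16.253.181:6379> LLEN filebeat
172.16.253.181:6379> LRANGE filebeat 0 0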

Check whether the elasticsearch cluster has received the data
  
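The original screenshot is no longer available; the same check can be made from the command line, for example:

[iyunv@node5 ~]# curl -XGET '172.16.100.120:9200/_cat/indices?v'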

Enable the geoip plugin
Download and install the GeoLite2 City database

[iyunv@node2 ~]# ll GeoLite2-City.tar.gz
-rw-r--r--. 1 root root 25511308 Aug 21 05:06 GeoLite2-City.tar.gz
[iyunv@node2 ~]# tar xf GeoLite2-City.tar.gz
[iyunv@node2 ~]# cd GeoLite2-City_20170704/
[iyunv@node2 GeoLite2-City_20170704]# mv GeoLite2-City.mmdb /etc/logstash/
  

Configure the logstash server pipeline
  

[iyunv@node2 ~]# vim /etc/logstash/conf.d/geoip.conf
input {
  redis {
    host      => "172.16.253.181"
    port      => "6379"
    password  => "danran"
    db        => "0"
    data_type => "list"       # read the data as a redis list
    key       => "filebeat"   # must match the key defined in filebeat.yml
  }
}

filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"
    }
    remove_field => "message"
  }
  geoip {
    source   => "clientip"                            # field holding the client IP to look up
    target   => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"    # path to the geoip database file
  }
}

output {
  elasticsearch {
    hosts  => ["http://172.16.100.120:9200","http://172.16.100.121:9200","http://172.16.100.122:9200"]
    index  => "logstash-%{+YYYY.MM.dd}"
    action => "index"
  }
}
  

Check the syntax of /etc/logstash/conf.d/geoip.conf

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/geoip.conf

Start logstash with this configuration file

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/geoip.conf

Access httpd from a client
  

[iyunv@node1 ~]# curl 172.16.100.121
test page

Check the data in elasticsearch-head; the region can now be looked up from the client IP address
  
Fake two access-log entries from external IP addresses
  

[iyunv@node2 ~]# echo '72.16.100.120 - - [11/Oct/2017:22:32:21 -0400] "GET / HTTP/1.1" 200 10 "-" "curl/7.29.0"' >> /var/log/httpd/access_log  

  
[iyunv@node2 ~]# echo '22.16.100.120 - - [11/Oct/2017:22:32:21 -0400] "GET / HTTP/1.1" 200 10 "-" "curl/7.29.0"' >> /var/log/httpd/access_log
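Instead of the screenshot, a quick spot check for one of the faked client IPs shows whether the geoip fields were attached:

[iyunv@node5 ~]# curl -XGET '172.16.100.120:9200/logstash-*/_search?q=clientip:72.16.100.120&pretty'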
  

Check the data in elasticsearch-head again; the regions for the new IP addresses can be looked up
  

Set up kibana
Download and install kibana
  

[iyunv@node4 ~]# ls kibana-5.5.1-x86_64.rpm  
kibana-5.5.1-x86_64.rpm
  
[iyunv@node4 ~]# rpm -ivh kibana-5.5.1-x86_64.rpm
  

Configure kibana.yml
  

[iyunv@node4 ~]# vim /etc/kibana/kibana.yml
server.port: 5601                                 # listening port
server.host: "0.0.0.0"                            # listening address
elasticsearch.url: "http://172.16.100.120:9200"   # URL of one node in the elasticsearch cluster
  

Start the service
  

[iyunv@node4 ~]# systemctl start kibana
[iyunv@node4 ~]# ss -ntl          # port 5601 should now be listening
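If the port is up but the UI will not load, kibana's own status endpoint (present in kibana 5.x) is another quick check; adjust the address if kibana is bound elsewhere:

[iyunv@node4 ~]# curl http://172.16.253.181:5601/api/status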
  

The logstash collection pipeline is the same /etc/logstash/conf.d/geoip.conf built in the previous section; check its syntax with -t and start logstash with it exactly as shown above.

View the data in elasticsearch-head
  
Load kibana in a browser

Point a browser at http://172.16.253.181:5601. On first load kibana asks for an index pattern; logstash-* matches the indices created above.


  
Graphical view of the access statistics
  

Collecting logs from a tomcat node
Install tomcat
  

[iyunv@node5 ~]# yum -y install tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp  
[iyunv@node5 ~]# systemctl start tomcat
  
[iyunv@node5 ~]# ss -ntl          # port 8080 is now listening
  

Locate the access log file
  

[iyunv@node5 ~]# ls /var/log/tomcat/localhost_access_log.2017-10-12.txt  
/var/log/tomcat/localhost_access_log.2017-10-12.txt
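This file is written by Tomcat's stock AccessLogValve, whose default pattern produces Common Log Format lines; that is why %{COMMONAPACHELOG} is used in the logstash filter further down. For reference, the valve definition in /etc/tomcat/server.xml typically looks like this (attribute values may differ on your distribution):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="common" />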
  

Install filebeat on the tomcat node
  

Download the filebeat package from https://www.elastic.co/downloads/beats/filebeat
  

  
[iyunv@node4 ~]# ls filebeat-5.5.1-x86_64.rpm
  
filebeat-5.5.1-x86_64.rpm
  

Install filebeat
  

[iyunv@node4 ~]# yum -y install filebeat-5.5.1-x86_64.rpm  
[iyunv@node4 ~]# rpm -ql filebeat
  

Configure filebeat.yml
  

[iyunv@node5 ~]# vim /etc/filebeat/filebeat.yml
- input_type: log
  paths:
    - /var/log/tomcat/*.txt          # collect the tomcat access logs

#--------------------------- redis output ---------------------
output.redis:
  hosts: ["172.16.253.181"]
  port: "6379"
  password: "danran"
  key: "fb-tomcat"
  db: 0
  timeout: 5
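As with the httpd pipeline, the new key can be checked from redis once a request has hit tomcat (LLEN may already be 0 if logstash has drained the list):

[iyunv@node4 ~]# redis-cli -h 172.16.253.181 -a danran LLEN fb-tomcat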
  

Start the filebeat service
  

[iyunv@node5 ~]# systemctl start filebeat   

Configure the collection pipeline on the logstash server
  

[iyunv@node2 ~]# vim /etc/logstash/conf.d/tomcat.conf
input {
  redis {
    host      => "172.16.253.181"
    port      => "6379"
    password  => "danran"
    db        => "0"
    data_type => "list"        # read the data as a redis list
    key       => "fb-tomcat"   # must match the key defined in filebeat.yml on the tomcat node
  }
}

filter {
  grok {
    match => {
      "message" => "%{COMMONAPACHELOG}"
    }
    remove_field => "message"
  }
  geoip {
    source   => "clientip"                            # field holding the client IP to look up
    target   => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"    # path to the geoip database file
  }
}

output {
  elasticsearch {
    hosts  => ["http://172.16.100.120:9200","http://172.16.100.121:9200","http://172.16.100.122:9200"]
    index  => "logstash-tomcat-%{+YYYY.MM.dd}"
    action => "index"
  }
}
  

Check the syntax of /etc/logstash/conf.d/tomcat.conf
  

[iyunv@node2 ~]# /usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/tomcat.conf

Restart logstash
  

[iyunv@node2 ~]# systemctl restart logstash  
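A command-line alternative to the elasticsearch-head check below (the date suffix depends on when the first tomcat event arrives):

[iyunv@node5 ~]# curl -XGET '172.16.100.120:9200/_cat/indices/logstash-tomcat-*?v'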

Check in elasticsearch-head that the logstash-tomcat index has been created
  
Configure kibana to visualize the index data
  

Point a browser at http://172.16.253.181:5601
