ELK 5.3 Environment Deployment

  1. Environment overview
  Server roles:
  192.168.50.211    kafka + zookeeper
  192.168.50.212    kafka + zookeeper
  192.168.50.213    kafka + zookeeper
  192.168.50.214    nginx, filebeat, logstash
  192.168.50.215    elasticsearch, kibana, logstash
  Software versions:
  kafka_2.12-0.10.2.1.tgz
  zookeeper-3.4.10.tar.gz
  filebeat-5.3.2-linux-x86_64.tar.gz
  kibana-5.3.2-linux-x86_64.tar.gz
  logstash-5.3.2.tar.gz
  elasticsearch-5.3.2.tar.gz
  x-pack-5.3.2.zip
  OS version of the Elasticsearch host:
  # uname -a
  Linux ansibleer 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
  # cat /etc/issue
  CentOS release 6.5 (Final)
  Kernel \r on an \m
  # uname -r
  2.6.32-431.el6.x86_64
  2. Elasticsearch installation and configuration
  2.1 Kernel upgrade
  Elasticsearch 5.x must be deployed on a system with a Linux kernel of version 3.0 or later.
  Kernel download:
  https://www.kernel.org/
  longterm: 3.10.105 (2017-02-10) [tar.xz]
  Install the build dependencies:
  # yum groupinstall "Development Tools"
  # yum install ncurses-devel
  # yum install qt-devel
  # yum install hmaccalc zlib-devel binutils-devel elfutils-libelf-devel
  Upgrade the kernel.
  Prepare the kernel build configuration:
  # tar -xf linux-3.10.105.tar.xz -C /usr/local
  # cd /usr/local/linux-3.10.105
  # cp /boot/config-2.6.32-431.el6.x86_64 .config
  # sh -c 'yes "" | make oldconfig'
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  SHIPPED scripts/kconfig/zconf.tab.c
  SHIPPED scripts/kconfig/zconf.lex.c
  SHIPPED scripts/kconfig/zconf.hash.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
  scripts/kconfig/conf --oldconfig Kconfig
  ............
  CRC8 function (CRC8) [M/y/?] (NEW)
  XZ decompression support (XZ_DEC) [Y/?] (NEW) y
  x86 BCJ filter decoder (XZ_DEC_X86) [Y/n] (NEW)
  PowerPC BCJ filter decoder (XZ_DEC_POWERPC) [N/y] (NEW)
  IA-64 BCJ filter decoder (XZ_DEC_IA64) [N/y] (NEW)
  ARM BCJ filter decoder (XZ_DEC_ARM) [N/y] (NEW)
  ARM-Thumb BCJ filter decoder (XZ_DEC_ARMTHUMB) [N/y] (NEW)
  SPARC BCJ filter decoder (XZ_DEC_SPARC) [N/y] (NEW)
  XZ decompressor tester (XZ_DEC_TEST) [N/m/y/?] (NEW)
  Averaging functions (AVERAGE) [Y/?] y
  CORDIC algorithm (CORDIC) [M/y/?] m
  JEDEC DDR data (DDR) [N/y/?] (NEW)
  #
  # configuration written to .config
  #
  Build the kernel, using the CPU count as the parallelism:
  # grep pro /proc/cpuinfo | wc -l
  2
  # make -j2 bzImage
  # make -j2 modules
  # make -j2 modules_install
  # make install
  sh /usr/local/linux-3.10.105/arch/x86/boot/install.sh 3.10.105 arch/x86/boot/bzImage \
  System.map "/boot"
  ERROR: modinfo: could not find module vsock
  ERROR: modinfo: could not find module vmci
  ERROR: modinfo: could not find module vmware_balloon
  # vi /etc/grub.conf
  default=1
  timeout=5
  splashimage=(hd0,0)/grub/splash.xpm.gz
  hiddenmenu
  title CentOS (3.10.105)
  root (hd0,0)
  kernel /vmlinuz-3.10.105 ro root=/dev/mapper/vg_ansibleer-lv_root rd_LVM_LV=vg_ansibleer/lv_swap rd_NO_LUKS rd_LVM_LV=vg_ansibleer/lv_root rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=128M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM LANG=en_US.UTF-8 rhgb quiet
  initrd /initramfs-3.10.105.img
  title CentOS (2.6.32-431.el6.x86_64)
  root (hd0,0)
  kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_ansibleer-lv_root rd_LVM_LV=vg_ansibleer/lv_swap rd_NO_LUKS rd_LVM_LV=vg_ansibleer/lv_root rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=128M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM LANG=en_US.UTF-8 rhgb quiet
  initrd /initramfs-2.6.32-431.el6.x86_64.img
  Change default=1 to default=0 so that the first entry, the new 3.10.105 kernel, boots by default, then reboot:
  # reboot
  # uname -r
  3.10.105
  2.2 Elasticsearch installation
  System configuration:
  # vi /etc/security/limits.conf
  Append:
  *               soft    nproc           65536
  *               hard    nproc           65536
  *               soft    nofile          65536
  *               hard    nofile          65536
  # vi /etc/sysctl.conf
  Append:
  fs.file-max=65536
  vm.max_map_count=262144
  # sysctl -p
  # vi /etc/security/limits.d/90-nproc.conf
  Change
  *          soft    nproc     1024
  to
  *          soft    nproc     2048
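  After logging in again, the new limits can be verified; a quick sanity check, run as the user that will start Elasticsearch (the expected values simply mirror the settings above):
  # ulimit -n
  65536
  # sysctl vm.max_map_count
  vm.max_map_count = 262144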
  Configure the Java environment:
  # tar -zxf jdk-8u101-linux-x64.tar.gz -C /usr/local
  # vi /etc/profile
  Append:
  export JAVA_HOME=/usr/local/jdk1.8.0_101
  export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  export PATH=$PATH:$JAVA_HOME/bin
  # source /etc/profile
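  A quick check that the JDK is now active (the version line should report 1.8.0_101):
  # java -version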
  Install and configure Elasticsearch:
  # groupadd esuser
  # useradd -g esuser -d /home/esuser -m esuser
  # passwd esuser
  # tar -zxf elasticsearch-5.3.2.tar.gz -C /usr/local
  # cd /usr/local
  # ln -sv elasticsearch-5.3.2 elasticsearch
  `elasticsearch' -> `elasticsearch-5.3.2'
  # mkdir -pv /data/elasticsearch/{data,logs}
  mkdir: created directory `/data'
  mkdir: created directory `/data/elasticsearch'
  mkdir: created directory `/data/elasticsearch/data'
  mkdir: created directory `/data/elasticsearch/logs'
  # chown -R esuser:esuser /data/
  # ll -d /data
  drwxr-xr-x. 3 esuser esuser 4096 May  5 13:21 /data
  # vi /usr/local/elasticsearch/config/elasticsearch.yml
  cluster.name: es-cluster
  node.name: es-node
  path.data: /data/elasticsearch/data
  path.logs: /data/elasticsearch/logs
  network.host: 182.180.117.200
  http.port: 9200
  bootstrap.memory_lock: false
  bootstrap.system_call_filter: false
  Start the service:
  # su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch &"
  Note: at this point the /usr/local/elasticsearch directory and every file under it must be owned by esuser (both user and group), otherwise Elasticsearch refuses to start.
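  If the ownership is not right, it can be corrected like this (note that the chown must target the symlink's target directory):
  # chown -R esuser:esuser /usr/local/elasticsearch-5.3.2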
  Open the corresponding firewall port:
  # vi /etc/sysconfig/iptables
  Below the existing rule for port 80, add a rule for 9200:
  -A INPUT -p tcp --dport 80 -j ACCEPT
  -A INPUT -p tcp --dport 9200 -j ACCEPT
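  The new rule only takes effect after the firewall is reloaded; on CentOS 6:
  # service iptables restart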
  Open http://192.168.50.215:9200/ in a browser; the response should be:
  {
    "name" : "es-node",
    "cluster_name" : "es-cluster",
    "cluster_uuid" : "F3vEJMuvTxmwrT5j049GPA",
    "version" : {
      "number" : "5.3.2",
      "build_hash" : "3068195",
      "build_date" : "2017-04-24T16:15:59.481Z",
      "build_snapshot" : false,
      "lucene_version" : "6.4.2"
    },
    "tagline" : "You Know, for Search"
  }
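  The same check from the command line:
  # curl http://192.168.50.215:9200/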
  2.3 Logstash setup on 215
  # tar -zxf logstash-5.3.2.tar.gz -C /usr/local
  # cd /usr/local
  # ln -sv logstash-5.3.2 logstash
  `logstash' -> `logstash-5.3.2'
  # cd /usr/local/logstash/config
  The following minimal pipeline can be used to test connectivity between the components:
  # vi logstash-simple.conf
  input { stdin { } }
  output {
    elasticsearch {
      hosts => ["182.180.117.200:9200"]
      user => "elastic"
      password => "changeme"
    }
    stdout { codec => rubydebug }
  }
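  Run the pipeline in the foreground and type a line; the event should be echoed by the rubydebug codec and indexed into Elasticsearch:
  # cd /usr/local/logstash
  # bin/logstash -f config/logstash-simple.conf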
  3. Kibana setup and plugin installation
  # tar -zxf kibana-5.3.2-linux-x86_64.tar.gz -C /usr/local
  # cd /usr/local
  # ln -sv kibana-5.3.2-linux-x86_64 kibana
  `kibana' -> `kibana-5.3.2-linux-x86_64'
  # cd kibana/bin
  # ./kibana-plugin install file:///root/x-pack-5.3.2.zip
  Elasticsearch plugin installation:
  # cd /usr/local/elasticsearch/bin
  # su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch-plugin install file:///root/x-pack-5.3.2.zip"
  -> Downloading file:///root/x-pack-5.3.2.zip
  [=================================================] 100%
  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  @     WARNING: plugin requires additional permissions     @
  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  * java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
  * java.lang.RuntimePermission getClassLoader
  * java.lang.RuntimePermission setContextClassLoader
  * java.lang.RuntimePermission setFactory
  * java.security.SecurityPermission createPolicy.JavaPolicy
  * java.security.SecurityPermission getPolicy
  * java.security.SecurityPermission putProviderProperty.BC
  * java.security.SecurityPermission setPolicy
  * java.util.PropertyPermission * read,write
  * java.util.PropertyPermission sun.nio.ch.bugLevel write
  * javax.net.ssl.SSLPermission setHostnameVerifier
  See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
  for descriptions of what these permissions allow and the associated risks.
  Continue with installation? [y/N]y
  -> Installed x-pack
  # cd /usr/local/elasticsearch/config
  # vi elasticsearch.yml
  Append:
  #x-pack authc
  xpack.security.authc:
    anonymous:
      username: guest
      roles: superuser
      authz_exception: true
  # cd /usr/local/kibana/config
  # vi kibana.yml
  server.host: "182.180.117.200"
  elasticsearch.url: "http://182.180.117.200:9200"
  elasticsearch.username: "elastic"
  elasticsearch.password: "changeme"
  pid.file: /var/run/kibana.pid
  # ps -ef | grep elasticsearch
  Kill the Elasticsearch process, then start it again:
  # su - esuser -c "/usr/local/elasticsearch/bin/elasticsearch &"
  Startup is complete once the port is listening, as shown below:
  # netstat -an | grep 9200
  tcp        0      0 ::ffff:182.180.117.200:9200 :::*                        LISTEN
  # cd /usr/local/logstash/bin
  # ./logstash -f ../config/logstash-simple.conf
  # cd /usr/local/kibana/bin
  # ./kibana
  # vi /etc/sysconfig/iptables
  Below the 9200 rule, add:
  -A INPUT -p tcp --dport 5601 -j ACCEPT
  Open http://192.168.50.215:5601 in a browser.
  The Kibana login page appears (figure: kibana-1).
  Log in with username elastic and password changeme.
  4. Kafka and ZooKeeper cluster setup
  For building the Kafka and ZooKeeper cluster, refer to my earlier deployment article for the ES 2.x version:
  Link:
  http://xiaoxiaozhou.blog.运维网.com/4681537/1854684
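  Once the cluster is up, the topic used later in this article can be created and verified with the stock Kafka 0.10 command-line tools. This is a sketch: it assumes the kafka tarball was unpacked to /usr/local/kafka, and the partition and replication counts are only examples:
  # cd /usr/local/kafka
  # bin/kafka-topics.sh --create --zookeeper 192.168.50.211:2181 --replication-factor 2 --partitions 3 --topic access-nginx-messages
  # bin/kafka-topics.sh --list --zookeeper 192.168.50.211:2181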
  5. Nginx log processing
  Server 214 runs an nginx instance that acts as a reverse proxy for several services.
  Nginx configuration:
  # cd /usr/local/nginx
  # vi conf/nginx.conf
  log_format  main  '$remote_addr - $upstream_addr - $remote_user [$time_local] "$request" '
  '$status $upstream_status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"'
  '$request_time - $upstream_cache_status' ;
  Sample nginx access log line:

  IP1 - IP2:port - - [11/May/2017:14:18:31 +0800] "GET /content/dam/phone/emv/index.html HTTP/1.1" 304 304 0 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.3 (KHTML, like Gecko) Version/8.0 Mobile/12A4345d Safari/600.1.4" "IP3, IP4, IP5, IP6"0.007 - -
  $remote_addr             client address
  $upstream_addr           upstream address, i.e. the backend host that actually served the request
  $remote_user             client user name
  $time_local              access time and time zone
  $request                 request URI and HTTP protocol
  $status                  HTTP request status
  $upstream_status         upstream status
  $body_bytes_sent         size of the content sent to the client
  $http_referer            referrer URL
  $http_user_agent         client browser and other user-agent information
  $http_x_forwarded_for    client IP chain: client IP, then load-balancer IPs
  $request_time            total time of the whole request
  $upstream_cache_status   cache status
  The grok pattern for this log format, built up one field at a time:
  %{IPORHOST:client_ip}
  (%{URIHOST:upstream_host}|-)
  %{USER:ident} %{USER:auth}
  \[%{HTTPDATE:timestamp}\]
  \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)"
  %{HOST:domain} %{NUMBER:response}
  (?:%{NUMBER:bytes}|-)
  %{QS:referrer}
  %{QS:agent}
  "(%{WORD:x_forword}|-)"
  (%{BASE16FLOAT:request_time})
  0\"-\"
  Install Logstash on 214 and create a directory for custom grok patterns:
  # tar -zxf logstash-5.3.2.tar.gz -C /usr/local
  # cd /usr/local
  # ln -sv logstash-5.3.2 logstash
  `logstash' -> `logstash-5.3.2'
  # cd logstash
  # mkdir patterns
  # vi patterns/nginx
  NGINXACCESS %{IPORHOST:client_ip} (%{URIHOST:upstream_host}|-) %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} "(%{WORD:x_forword}|-)" (%{BASE16FLOAT:request_time}) 0\"-\"
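  The pattern can be smoke-tested before wiring up the real pipeline by feeding one real line from the access log through an inline Logstash config (a sketch; the parsed fields should appear in the rubydebug output):
  # cd /usr/local/logstash
  # head -1 /usr/local/nginx/logs/access.log | bin/logstash -e 'input { stdin {} } filter { grok { patterns_dir => "/usr/local/logstash/patterns" match => { "message" => "%{NGINXACCESS}" } } } output { stdout { codec => rubydebug } }'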
  Note: Logstash also requires JDK 1.8.
  Logstash pipeline file on 214:
  # vi /usr/local/logstash/config/logstash_in_nginx.conf
  input {
    file {
      type => "nginx-access"
      path => "/usr/local/nginx/logs/access.log"
      tags => [ "nginx","access" ]
    }
    file {
      type => "nginx-error"
      path => "/usr/local/nginx/logs/error.log"
      tags => [ "nginx","error" ]
    }
  }
  output {
    stdout { codec => rubydebug }
    kafka {
      bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
      topic_id => "access-nginx-messages"
    }
  }
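  Start the shipper on 214 and check that events arrive in Kafka; the consumer command is a sketch that assumes Kafka is unpacked under /usr/local/kafka on one of the brokers:
  # nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash_in_nginx.conf &
  On a broker:
  # /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.50.211:9092 --topic access-nginx-messages --from-beginning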
  Logstash pipeline file on 215:
  # vi /usr/local/logstash/config/logstash_nginx_indexer.conf
  input {
    kafka {
      bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
      topics => ["access-nginx-messages"]
    }
  }
  filter {
    if [type] == "nginx-access" {
      grok {
        patterns_dir => "/usr/local/logstash/patterns"
        match => {
          "message" => "%{NGINXACCESS}"
        }
      }
      date {
        match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
        target => "logdate"
      }
      ruby {
        code => "event.set('logdateunix',event.get('logdate').to_i)"
      }
    }
    else if [type] == "nginx-error" {
      grok {
        match => [
          "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
          "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"]
      }
      date {
        match => ["time","yyyy/MM/dd HH:mm:ss"]
        target => "logdate"
      }
      ruby {
        code => "event.set('logdateunix',event.get('logdate').to_i)"
      }
    }
  }
  output {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["192.168.50.215:9200"]
      index => "access-nginx-messages-%{+YYYY.MM.dd}"
      flush_size => 20000
      idle_flush_time => 10
      template_overwrite => true
    }
  }
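  Start the indexer on 215 and confirm the daily index appears (anonymous access was enabled above, so curl needs no credentials):
  # nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash_nginx_indexer.conf &
  # curl '192.168.50.215:9200/_cat/indices?v' | grep access-nginx-messages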
  6. Elasticsearch index operations
  List the index names present in search results:
  # curl 192.168.50.215:9200/_search?pretty=true  | grep _index
  Delete the indices beginning with nginx-access-messages and with access-nginx-messages:
  # curl -XDELETE 'http://192.168.50.215:9200/nginx-access-messages*'
  {"acknowledged":true}[root@ansibleer config]#
  # curl -XDELETE 'http://192.168.50.215:9200/access-nginx-messages*'
  Create an index:
  # curl -XPUT '192.168.50.215:9200/customer?pretty'
  {
    "acknowledged" : true,
    "shards_acknowledged" : true
  }
  List all indices:
  # curl '192.168.50.215:9200/_cat/indices?v'
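  Cluster health can be checked with the standard health endpoint:
  # curl '192.168.50.215:9200/_cluster/health?pretty'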
  7. Filebeat and Logstash configuration on server 214
  # tar -zxf filebeat-5.3.2-linux-x86_64.tar.gz -C /usr/local
  # cd /usr/local
  # ln -s filebeat-5.3.2 filebeat
  # cd filebeat
  # egrep -v "#|^$" filebeat.yml
  filebeat.prospectors:
  - input_type: log
    paths:
      - /usr/local/nginx/logs/access.log
  output.logstash:
    hosts: ["192.168.50.214:5043"]
  # cd /usr/local/logstash
  # cat config/logstash_in_nginx.conf
  input {
    beats {
      port => "5043"
    }
  }
  filter {
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => {
        "message" => "%{NGINXACCESS}"
      }
    }
  }
  output {
    stdout { codec => rubydebug }
    kafka {
      bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
      topic_id => "access-nginx-messages"
    }
  }
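  Startup order matters: start Logstash first so the beats port is listening, then start Filebeat (a sketch of the sequence):
  # nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash_in_nginx.conf &
  # netstat -an | grep 5043
  # cd /usr/local/filebeat
  # nohup ./filebeat -c filebeat.yml &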
  Filebeat configuration on server 215:
  # tar -zxf filebeat-5.3.2-linux-x86_64.tar.gz -C /usr/local
  # cd /usr/local
  # ln -s filebeat-5.3.2-linux-x86_64 filebeat
  # cd filebeat
  # egrep -v "#|^$" filebeat.yml
  filebeat.prospectors:
  - input_type: log
    paths:
      - /usr/local/nginx/logs/access.log
  output.logstash:
    hosts: ["192.168.50.215:5043"]
  # nohup ./filebeat -c filebeat.yml &
  Logstash pipeline file on server 215:
  # cat /usr/local/logstash/config/logstash_in_nginx.conf
  input {
    beats {
      port => "5043"
    }
  }
  filter {
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => {
        "message" => "%{NGINXACCESS}"
      }
    }
  }
  output {
    stdout { codec => rubydebug }
    kafka {
      bootstrap_servers => "192.168.50.211:9092,192.168.50.212:9092,192.168.50.213:9092"
      topic_id => "access-nginx-messages"
    }
  }



