[Experience Share] ELK: Collecting Logs via a Message Queue

Posted on 2019-1-28 10:08:14
I. ELK Stack Overview
Elasticsearch: storage and search
Logstash: collection
Kibana: visualization, a display platform designed specifically for ES

II. ELK: Collecting Logs via a Message Queue
1. Environment

IP              Hostname       OS
192.168.56.11   linux-node1    CentOS 7
192.168.56.12   linux-node2    CentOS 7

2. Flow Analysis
logs --> logstash (flume) --> MQ (redis, rabbitmq) --> logstash (python scripts) --> es
Official reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
The flow diagram is as follows:


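The flow above can be sketched end to end with in-memory stand-ins: a deque plays the role of the Redis/RabbitMQ broker, one function plays the shipper-side logstash, and another plays the indexer-side consumer. All names here are illustrative, not part of any real logstash API:

```python
from collections import deque
import json

queue = deque()  # in-memory stand-in for the Redis/RabbitMQ broker

def ship(line):
    # shipper-side logstash: wrap the raw log line into an event and enqueue it
    event = {"message": line, "@version": "1", "type": "demo"}
    queue.append(json.dumps(event))  # RPUSH equivalent

def index_all():
    # indexer-side logstash: pop events off the queue and hand them to es
    events = []
    while queue:
        events.append(json.loads(queue.popleft()))  # LPOP equivalent
    return events

ship("first line")
ship("second line")
events = index_all()
```

The broker decouples the two halves: the shipper never blocks on es, and the indexer can drain the backlog at its own pace.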
III. Installing and Configuring Redis
1. Install redis
Either compiling from source or installing via yum works; here we use yum.
On 192.168.56.12, run:
yum -y install redis

2. Configure redis
Edit the redis configuration file; two settings need changing:
vim /etc/redis.conf
daemonize yes              # run as a background daemon
bind 192.168.56.12         # address to bind to
Save and exit.

3. Start redis and test the connection
Start redis:
systemctl start redis
Test the connection:
[root@linux-node2 ~]# redis-cli -h 192.168.56.12 -p 6379
192.168.56.12:6379>

IV. The Process of Collecting Logs via Redis, Step by Step
1. Writing data into redis
On 192.168.56.12, write a redis.conf config file that sends standard input into redis:
[root@linux-node2 ~]# cat /etc/logstash/conf.d/redis.conf
input {
    stdin {}
}
output {
    redis {
        host => "192.168.56.12"          # redis host address
        port => "6379"                   # redis default port is 6379
        db => "6"                        # default db is 0; here we write to db 6
        data_type => "list"              # use a list as the data type
        key => "demo"                    # key under which events are stored
    }
}

Start logstash and type something on stdin to verify:
[root@linux-node2 conf.d]# /opt/logstash/bin/logstash -f redis.conf
Settings: Default pipeline workers: 1
Pipeline main started
papapapa

Connect to redis, select the db, and check whether the data was written:
[root@linux-node2 ~]# redis-cli -h 192.168.56.12 -p 6379
192.168.56.12:6379> select 6
OK
192.168.56.12:6379[6]> keys *
1) "demo"
192.168.56.12:6379[6]> llen demo
(integer) 1
192.168.56.12:6379[6]> lindex demo -1
"{\"message\":\"papapapa\",\"@version\":\"1\",\"@timestamp\":\"2016-09-04T08:08:11.998Z\",\"host\":\"linux-node2\"}"
192.168.56.12:6379[6]>
This confirms that data can be written into redis.
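What `lindex demo -1` returned is a JSON-serialized logstash event; any consumer (the "python scripts" stage in the flow diagram) can decode it directly. A sketch using the sample entry above:

```python
import json

# The value retrieved with `lindex demo -1` above, pasted as a Python string
raw = '{"message":"papapapa","@version":"1","@timestamp":"2016-09-04T08:08:11.998Z","host":"linux-node2"}'

event = json.loads(raw)
# The standard logstash event fields are now plain dict keys
print(event["message"], event["host"])
```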
2. Reading logs from files and writing them into redis
On 192.168.56.11, write the /etc/logstash/conf.d/apache.conf config file:
[root@linux-node1 /var/log/httpd]# vim /etc/logstash/conf.d/apache.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
        type => "apache-accesslog"
    }
    file {
        path => "/var/log/elasticsearch/myes.log"
        type => "es-log"
        start_position => "beginning"
        codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
    }
}
output {
    if [type] == "apache-accesslog" {
        redis {
            host => "192.168.56.12"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "apache-accesslog"
        }
    }
    if [type] == "es-log" {
        redis {
            host => "192.168.56.12"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "es-log"
        }
    }
}

Start logstash:
[root@linux-node1 /etc/logstash/conf.d]# /opt/logstash/bin/logstash -f apache.conf
Settings: Default pipeline workers: 4
Pipeline main started

On the redis side, check whether the data has been written: connect to redis, select db 6, and run keys *:
192.168.56.12:6379[6]> keys *
1) "es-log"
2) "demo"
3) "apache-accesslog"
192.168.56.12:6379[6]> llen es-log
(integer) 44
192.168.56.12:6379[6]> lindex es-log -1
"{\"@timestamp\":\"2016-09-04T20:34:23.717Z\",\"message\":\"[2016-09-05 03:52:18,878][INFO ][cluster.routing.allocation] [linux-node1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[system-log-2016.09][3]] ...]).\",\"@version\":\"1\",\"path\":\"/var/log/elasticsearch/myes.log\",\"host\":\"linux-node1\",\"type\":\"es-log\"}"
192.168.56.12:6379[6]> llen apache-acccesslog
(integer) 0
192.168.56.12:6379[6]> lindex apache-accesslog -1
"{\"message\":\"papapapa\",\"@version\":\"1\",\"@timestamp\":\"2016-09-04T20:46:03.164Z\",\"path\":\"/var/log/httpd/access_log\",\"host\":\"linux-node1\",\"type\":\"apache-accesslog\"}"
As you can see, not only the apache-accesslog but also the es-log entries are collected.
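The multiline codec used for myes.log is worth unpacking: with `pattern => "^\["`, `negate => true`, and `what => "previous"`, any line that does NOT start with `[` is appended to the previous event, so a Java stack trace stays glued to its log line. A minimal sketch of that joining logic (the sample lines are hypothetical):

```python
import re

def join_multiline(lines, pattern=r"^\["):
    # negate => true, what => "previous": a line NOT matching ^\[ is
    # appended to the previous event instead of starting a new one
    events, buf = [], []
    for line in lines:
        if re.match(pattern, line) and buf:  # a new event starts: flush buffer
            events.append("\n".join(buf))
            buf = []
        buf.append(line)
    if buf:
        events.append("\n".join(buf))
    return events

lines = [
    "[2016-09-05 01:00:00][INFO ][node] started",
    "[2016-09-05 01:00:05][WARN ][monitor] something failed",
    "java.lang.RuntimeException: boom",       # hypothetical continuation lines
    "    at Foo.bar(Foo.java:1)",
]
events = join_multiline(lines)  # two events; the trace rides with its WARN line
```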
3. Reading data back out of redis
On 192.168.56.12, write the /etc/logstash/conf.d/input_redis.conf config file, printing events to standard output:
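Each redis input block below repeatedly pops entries off one list key and tags every event with that input's `type`. The same idea, simulated in memory (sample payloads and names are hypothetical):

```python
import json

# In-memory stand-in for the two lists held in redis db 6
lists = {
    "apache-accesslog": ['{"message":"GET / 200"}'],
    "es-log": ['{"message":"[INFO] node started"}'],
}

def consume(key):
    # data_type => "list": keep popping the key until it is drained,
    # tagging each event with the input's type (here, same as the key name)
    events = []
    while lists[key]:
        event = json.loads(lists[key].pop(0))  # LPOP equivalent
        event["type"] = key
        events.append(event)
    return events

apache_events = consume("apache-accesslog")
```

Draining the list means the queue also acts as a buffer: whatever backlog accumulated while the consumer was down is processed on startup.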
[root@linux-node2 conf.d]# cat input_redis.conf
input{
    redis {
        type => "apache-accesslog"
        host => "192.168.56.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "apache-accesslog"
    }
    redis {
        type => "es-log"
        host => "192.168.56.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "es-log"
    }
}
output{
    stdout {
        codec => rubydebug
    }
}

Start logstash; once it is running, events read from redis are printed to standard output immediately:
[root@linux-node2 conf.d]# /opt/logstash/bin/logstash -f input_redis.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
       "message" => "::1 - - [05/Sep/2016:03:28:31 +0800] \"OPTIONS * HTTP/1.0\" 200 - \"-\" \"Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips PHP/5.4.16 mod_wsgi/3.4 Python/2.7.5 (internal dummy connection)\"",
      "@version" => "1",
    "@timestamp" => "2016-09-04T20:34:22.928Z",
          "path" => "/var/log/httpd/access_log",
          "host" => "linux-node1",
          "type" => "apache-accesslog"
}
{
       "message" => "::1 - - [05/Sep/2016:03:28:31 +0800] \"OPTIONS * HTTP/1.0\" 200 - \"-\" \"Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips PHP/5.4.16 mod_wsgi/3.4 Python/2.7.5 (internal dummy connection)\"",
      "@version" => "1",
    "@timestamp" => "2016-09-04T20:34:23.266Z",
          "path" => "/var/log/httpd/access_log",
          "host" => "linux-node1",
          "type" => "apache-accesslog"
}
{
    "@timestamp" => "2016-09-04T20:34:22.942Z",
       "message" => "[2016-09-05 01:06:18,066][WARN ][monitor.jvm              ] [linux-node1] [gc][young][10361][1114] duration [8.4s], collections [1]/[8.5s], total [8.4s]/[27.8s], memory [172.1mb]->[108.5mb]/[990.7mb], all_pools {[young] [66.4mb]->[1.5mb]/[266.2mb]}{[survivor] [3.8mb]->[4.9mb]/[33.2mb]}{[old] [101.8mb]->[102.2mb]/[691.2mb]}",
      "@version" => "1",
          "path" => "/var/log/elasticsearch/myes.log",
          "host" => "linux-node1",
          "type" => "es-log"
}
{
    "@timestamp" => "2016-09-04T20:34:23.277Z",
       "message" => "[2016-09-05 03:39:50,356][INFO ][node                     ] [linux-node1] stopping ...",
      "@version" => "1",
          "path" => "/var/log/elasticsearch/myes.log",
          "host" => "linux-node1",
          "type" => "es-log"
}

4. Formatting the apache log
Use the grok plugin from the filter stage to parse the apache log into structured fields:
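Grok's `%{COMBINEDAPACHELOG}` is, under the hood, a library of named-group regexes. A simplified Python stand-in (the real grok pattern is richer; this sketch covers only the leading fields) shows what it extracts:

```python
import re

# Simplified stand-in for grok's COMBINEDAPACHELOG pattern, as named groups
APACHE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+)'
)

line = ('192.168.56.1 - - [05/Sep/2016:05:31:39 +0800] '
        '"GET / HTTP/1.1" 200 67 "-" "Mozilla/5.0"')

fields = APACHE.match(line).groupdict()
# fields now holds clientip, verb, request, response, bytes, etc.
```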
[root@linux-node2 conf.d]# cat input_redis.conf
input{
    redis {
        type => "apache-accesslog"
        host => "192.168.56.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "apache-accesslog"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

Restart logstash and check standard output; the log content is now structured json:
[root@linux-node2 conf.d]# /opt/logstash/bin/logstash -f input_redis.conf
Settings: Default pipeline workers: 1
Pipeline main started
{
        "message" => "192.168.56.1 - - [05/Sep/2016:05:31:39 +0800] \"GET / HTTP/1.1\" 200 67 \"-\" \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36\"",
       "@version" => "1",
     "@timestamp" => "2016-09-04T21:31:40.819Z",
           "path" => "/var/log/httpd/access_log",
           "host" => "linux-node1",
           "type" => "apache-accesslog",
       "clientip" => "192.168.56.1",
          "ident" => "-",
           "auth" => "-",
      "timestamp" => "05/Sep/2016:05:31:39 +0800",
           "verb" => "GET",
        "request" => "/",
    "httpversion" => "1.1",
       "response" => "200",
          "bytes" => "67",
       "referrer" => "\"-\"",
          "agent" => "\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36\""
}
{
        "message" => "192.168.56.1 - - [05/Sep/2016:05:31:40 +0800] \"GET /favicon.ico HTTP/1.1\" 404 209 \"http://192.168.56.11:81/\" \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36\"",
       "@version" => "1",
     "@timestamp" => "2016-09-04T21:31:40.825Z",
           "path" => "/var/log/httpd/access_log",
           "host" => "linux-node1",
           "type" => "apache-accesslog",
       "clientip" => "192.168.56.1",
          "ident" => "-",
           "auth" => "-",
      "timestamp" => "05/Sep/2016:05:31:40 +0800",
           "verb" => "GET",
        "request" => "/favicon.ico",
    "httpversion" => "1.1",
       "response" => "404",
          "bytes" => "209",
       "referrer" => "\"http://192.168.56.11:81/\"",
          "agent" => "\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36\""
}

5. Collecting logs end to end through redis
Start logstash on both 192.168.56.11 and 192.168.56.12: 192.168.56.11 reads the apache access log into redis; 192.168.56.12 reads from redis, parses the entries into json fields, and writes them into es.
The config file on 192.168.56.11:
[root@linux-node1 /var/log/httpd]# cat /etc/logstash/conf.d/apache.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
        type => "apache-accesslog"
    }
}
output {
    if [type] == "apache-accesslog" {
        redis {
            host => "192.168.56.12"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "apache-accesslog"
        }
    }
}

The config file on 192.168.56.12:
[root@linux-node2 conf.d]# cat in_redis_out.conf
input{
    redis {
        type => "apache-accesslog"
        host => "192.168.56.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "apache-accesslog"
    }
}
filter {
    if [type] == "apache-accesslog" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
    }
}
output {
    if [type] == "apache-accesslog" {
        elasticsearch {
            hosts => ["192.168.56.11:9200"]
            index => "apache-accesslog-%{+YYYY.MM.dd}"
        }
    }
}

Start logstash on each node:
[root@linux-node1 /etc/logstash/conf.d]# /opt/logstash/bin/logstash -f apache.conf
Settings: Default pipeline workers: 4
Pipeline main started
[root@linux-node2 conf.d]# /opt/logstash/bin/logstash -f in_redis_out.conf
Settings: Default pipeline workers: 1
Pipeline main started

Note: for ease of testing, everything here is started in the foreground. Running /etc/init.d/logstash start instead would load every config file under /etc/logstash/conf.d/, which would interfere with testing.
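The `%{+YYYY.MM.dd}` sprintf reference in the index name expands from the event's `@timestamp` (which logstash keeps in UTC), producing one index per day. In Python terms, roughly:

```python
from datetime import datetime, timezone

# @timestamp of one of the events seen earlier, in UTC
ts = datetime(2016, 9, 4, 21, 31, 40, tzinfo=timezone.utc)

# %{+YYYY.MM.dd} maps to a strftime-style date suffix on the index name
index = "apache-accesslog-" + ts.strftime("%Y.%m.%d")
# → "apache-accesslog-2016.09.04"
```

Daily indices keep retention simple: dropping old logs is just deleting old indices.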
6. Viewing the log content in ES
Visit the apache instance on 192.168.56.11 to generate some access-log entries.
Then open http://192.168.56.11:9200/_plugin/head
The apache-accesslog-2016.09.04 index has been created, as shown:

Click "数据浏览" (Data Browse) to view the indexed documents.