xiaui520 posted on 2019-1-28 12:27:11

Building and Deploying a Distributed ELK Platform (Part 2)

  logstash
  

  · What is logstash
  – logstash is a tool for collecting, processing, and transporting data
  · logstash features:
  – centralized processing of all types of data
  – normalization of data in different schemas and formats
  – rapid extension to custom log formats
  – easy plugin addition for custom data sources
  · Installing logstash
  – Logstash depends on a Java environment; install java-1.8.0-openjdk
  – Logstash ships without a default configuration file; you must write one yourself
  – logstash is installed under /opt/logstash
  # rpm -ivh logstash-2.3.4-1.noarch.rpm
  # rpm -qc logstash
  /etc/init.d/logstash
  /etc/logrotate.d/logstash
  /etc/sysconfig/logstash
  // list the logstash plugin modules
  # /opt/logstash/bin/logstash-plugin list
  logstash-codec-collectd
  logstash-codec-json
  ....
  logstash-filter-anonymize
  logstash-filter-checksum
  ..   ..
  logstash-input-beats
  logstash-input-exec
  .. ..
  logstash-output-cloudwatch
  logstash-output-csv
  ....
  logstash-patterns-core
  The first column is the logstash plugin name.
  The middle segment of the name shows which stage the plugin runs in. codec plugins handle encoding and decoding (character encodings) and can be used in every stage.
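Since the stage is encoded in the second dash-separated field of each plugin name, a saved plugin list can be tallied by stage with cut/sort/uniq. A sketch, run here on a tiny hand-written sample rather than live `logstash-plugin list` output:

```shell
# Tally plugin names by their stage field (input/filter/output/codec)
out=$(printf '%s\n' \
  logstash-input-beats logstash-input-exec \
  logstash-filter-grok logstash-output-csv logstash-codec-json |
  cut -d- -f2 | sort | uniq -c)
echo "$out"
```

Piping the real `/opt/logstash/bin/logstash-plugin list` output through the same `cut | sort | uniq -c` gives a per-stage count of everything installed.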
  · logstash pipeline structure
  – { data source } ==>
  –            input { } ==>                // collect logs
  –                     filter { } ==>         // process logs, normalize the format
  –                            output { } ==>      // emit logs
  –                                             { ES }
  · Value types in logstash
  – boolean: ssl_enable => true
  – bytes: bytes => "1MiB"
  – string: name => "xkops"
  – number: port => 22
  – array: match => ["datetime","UNIX"]
  – hash: options => {k => "v",k2 => "v2"}
  – codec: codec => "json"
  – path: file_path => "/tmp/filename"
  – comment: #
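As a sketch, several of these value types can appear together in one plugin block (the options below are illustrative, drawn from the examples above, not a complete working pipeline):

```text
input {
  tcp {
    port => 8888            # number
    ssl_enable => false     # boolean
    type => "demo"          # string
    codec => "json"         # codec
  }
}
```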
  · Conditionals in logstash
  – equals: ==
  – not equals: !=
  – less than: <
  – greater than: >
  – less than or equal: <=
  – regex match: =~
  – regex non-match: !~
  Conditionals in logstash (continued)
  – contains: in
  – does not contain: not in
  – and: and
  – or: or
  – nand: nand
  – xor: xor
  – compound expression: ()
  – negated expression: !()
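A minimal sketch of how these operators combine in a filter block (the field names and values here are assumptions for illustration only):

```text
filter {
  if [type] == "apache" and [status] !~ /^2\d\d$/ {
    # events of type "apache" whose status is not 2xx reach this branch
    drop { }
  }
}
```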
  Configuring logstash
  # cd /etc/logstash/
  // logstash has no default configuration file
  # vim logstash.conf
  input{
  stdin{}            // standard input
  }
  filter{}
  output{
  stdout{}      // standard output
  }
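The same minimal configuration can be written non-interactively with a shell heredoc instead of vim (a plain shell convenience; the file is created under mktemp here so the sketch is self-contained):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
input{
stdin{}
}
filter{}
output{
stdout{}
}
EOF
lines=$(wc -l < "$conf")
echo "wrote $lines lines"
```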
  # /opt/logstash/bin/logstash -f logstash.conf             // behaves like cat
  Settings: Default pipeline workers: 1
  Pipeline main started
  Hello word!!!
  2018-01-26T13:21:22.031Z 0.0.0.0 Hello word!!!
  – The configuration file above uses two plugins: logstash-input-stdin and logstash-output-stdout.
  The detailed usage of each plugin is documented at https://github.com/logstash-plugins
  Exercise 1
  # vim logstash.conf
  input{
  stdin{ codec => "json" }
  }
  filter{}
  output{
  stdout{}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  12345                         // 12345 is not in JSON object form, so it raises an error
  A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: "UTF-8">>
  Error: can't convert String into Integer {:level=>:error}
  'abc'
  2018-01-26T13:43:17.840Z 0.0.0.0 'abc'
  '{"a":1,"b":2}'
  2018-01-26T13:43:46.889Z 0.0.0.0 '{"a":1,"b":2}'
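When the json codec rejects a line, it helps to check the input outside logstash first. A sketch using python3's stdlib json module from the shell (a convenience tool, not part of logstash; note the json codec also expects a JSON object, so a bare number like 12345 can still fail inside logstash even though it is technically valid JSON):

```shell
# Each probe prints "valid" or "invalid" depending on whether the line parses as JSON
ok=$(echo '{"a":1,"b":2}' | python3 -m json.tool >/dev/null 2>&1 && echo valid || echo invalid)
bad=$(echo "'abc'" | python3 -m json.tool >/dev/null 2>&1 && echo valid || echo invalid)
echo "$ok $bad"
```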
  Exercise 2
  # vim logstash.conf
  input{
  stdin{ codec => "json" }
  }
  filter{}
  output{
  stdout{ codec => "rubydebug"}      // debug-friendly output format
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  '"aaa"'
  {
  "message" => "'\"aaa\"'",
  "tags" => [
   "_jsonparsefailure"
  ],
  "@version" => "1",
  "@timestamp" => "2018-01-26T13:52:45.307Z",
  "host" => "0.0.0.0"
  }
  {"aa":1,"bb":2}
  {
  "aa" => 1,
  "bb" => 2,
  "@version" => "1",
  "@timestamp" => "2018-01-26T13:53:00.452Z",
  "host" => "0.0.0.0"
  }
  Exercise 3
  # touch /tmp/a.log /tmp/b.log
  # vim logstash.conf
  input{
  file {                // watch files
  path => ["/tmp/a.log","/tmp/b.log"]      // paths of the watched files
  type => "testlog"                // declare the file type
  }
  }
  filter{}
  output{
  stdout{ codec => "rubydebug"}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  ...      // now watching the log files
  // switch to another terminal to test
  # cd /tmp/
  // append content to a log file
  # echo 12345 > a.log
  // the watching terminal now prints the event
  {
  "message" => "12345",
  "@version" => "1",
  "@timestamp" => "2018-01-27T00:45:15.470Z",
  "path" => "/tmp/a.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  # echo b123456 > b.log
  {
  "message" => "b123456",
  "@version" => "1",
  "@timestamp" => "2018-01-27T00:45:30.487Z",
  "path" => "/tmp/b.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  # echo c123456 >> b.log
  {
  "message" => "c123456",
  "@version" => "1",
  "@timestamp" => "2018-01-27T00:45:45.501Z",
  "path" => "/tmp/b.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  // By default the read position is recorded in a file in the administrator's home directory, named .sincedb_ followed by a hash string
  # cat /root/.sincedb_ab3977c541d1144f701eedeb3af4956a
  3190503 0 64768 6
  3190504 0 64768 16
  # du -b /tmp/a.log             // du -b shows the file size in bytes
  6	/tmp/a.log
  # du -b /tmp/b.log
  16	/tmp/b.log
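The last number on each sincedb line is the byte offset already read; when it equals the file's size, logstash has consumed the whole file. A quick sketch of checking that correspondence (the file content mirrors the a.log example above):

```shell
tmp=$(mktemp)
echo 12345 > "$tmp"        # 5 digits + newline = 6 bytes, matching the sincedb offset above
size=$(wc -c < "$tmp")
echo "size=$size"
rm -f "$tmp"
```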
  // optimization
  # vim logstash.conf
  input{
  file {
  start_position => "beginning"            // when the position-record file does not exist, read from the beginning of the file
  sincedb_path => "/var/lib/logstash/sincedb-access"    // the position record defaults to each user's home directory, so pin it to a fixed location
  path => ["/tmp/a.log","/tmp/b.log"]
  type => "testlog"
  }
  }
  filter{}
  output{
  stdout{ codec => "rubydebug"}
  }
  # rm -rf /root/.sincedb_ab3977c541d1144f701eedeb3af4956a
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  {
  "message" => "12345",
  "@version" => "1",
  "@timestamp" => "2018-01-27T10:44:54.890Z",
  "path" => "/tmp/a.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  {
  "message" => "b123456",
  "@version" => "1",
  "@timestamp" => "2018-01-27T10:44:55.242Z",
  "path" => "/tmp/b.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  {
  "message" => "c123456",
  "@version" => "1",
  "@timestamp" => "2018-01-27T10:44:55.242Z",
  "path" => "/tmp/b.log",
  "host" => "0.0.0.0",
  "type" => "testlog"
  }
  Exercise: the tcp and udp plugins
  # vim logstash.conf
  input{
  tcp {
  host => "0.0.0.0"
  port => 8888
  type => "tcplog"
  }
  udp {
  host => "0.0.0.0"
  port => 9999
  type => "udplog"
  }
  }
  filter{}
  output{
  stdout{ codec => "rubydebug"}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  ...
  // check the listening ports from another terminal
  # netstat -pantu | grep -E "(8888|9999)"
  tcp6       0      0 :::8888               :::*                  LISTEN      3098/java
  udp6       0      0 :::9999               :::*                              3098/java
  Simulate a client sending data
  // send TCP data; exec rebinds the shell's file descriptors (the same way stdin and stdout can be redirected)
  # exec 9<>/dev/tcp/192.168.4.10/8888            // open the connection
  # echo "hello world" >&9                                    // send the string "hello world" over the connection
  # exec 9<&-                                     // close the descriptor
  // the logstash terminal prints:
  {
  "message" => "hello world",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:01:35.356Z",
  "host" => "192.168.4.11",
  "port" => 48654,
  "type" => "tcplog"
  }
  // send UDP data
  # exec 7<>/dev/udp/192.168.4.10/9999
  # echo "is udp log" >&7
  # exec 7<&-
  // the logstash terminal prints:
  {
  "message" => "is udp log\n",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:05:18.850Z",
  "type" => "udplog",
  "host" => "192.168.4.11"
  }
  // send a file
  # exec 8<>/dev/udp/192.168.4.10/9999
  # cat /etc/hosts >&8
  # exec 8<&-
  // the logstash terminal prints:
  {
  "message" => "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n192.168.4.11\tes1\n192.168.4.12\tes2\n192.168.4.13\tes3\n192.168.4.14\tes4\n192.168.4.15\tes5\n",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:10:31.099Z",
  "type" => "udplog",
  "host" => "192.168.4.11"
  }
  Exercise: the syslog plugin
  # vim logstash.conf
  input{
  syslog{
  host => "192.168.4.10"
  port => 514                  // default syslog port
  type => "syslog"
  }
  }
  filter{}
  output{
  stdout{ codec => "rubydebug"}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  . . .
  # netstat -pantu | grep 514
  tcp6       0      0 192.168.4.10:514      :::*                  LISTEN      3545/java
  udp6       0      0 192.168.4.10:514      :::*                              3545/java
  // on the client host, forward a custom log facility to the logstash host
  # vim /etc/rsyslog.conf
  # sed -n '74p' /etc/rsyslog.conf
  local0.info                                             @@192.168.4.10:514
  # systemctl restart rsyslog.service
  # logger -p local0.info -t testlog "hello world"      // send one test log line
  // the logstash host receives the log
  {
  "message" => "hello world\n",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:29:03.000Z",
  "type" => "syslog",
  "host" => "192.168.4.11",
  "priority" => 134,
  "timestamp" => "Jan 27 06:29:03",
  "logsource" => "es1",
  "program" => "testlog",
  "severity" => 6,
  "facility" => 16,
  "facility_label" => "local0",
  "severity_label" => "Informational"
  }
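The priority field in the event is not independent information: syslog encodes it as facility × 8 + severity. Checking against the numbers in the event above:

```shell
facility=16   # local0, from the event above
severity=6    # Informational
priority=$(( facility * 8 + severity ))
echo "priority=$priority"    # → priority=134
```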
  Extension: to also forward login logs to logstash
  # vim /etc/rsyslog.conf
  # sed -n '75p' /etc/rsyslog.conf
  authpriv.*                                              @@192.168.4.10:514
  # systemctl restart rsyslog.service
  // test by logging out and back in
  # exit
  logout
  Connection to 192.168.4.11 closed.
  # ssh -X root@192.168.4.11
  root@192.168.4.11's password:
  Last login: Sat Jan 27 05:27:21 2018 from 192.168.4.254
  // logs received in real time
  {
  "message" => "Accepted password for root from 192.168.4.254 port 50820 ssh2\n",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:32:07.000Z",
  "type" => "syslog",
  "host" => "192.168.4.11",
  "priority" => 86,
  "timestamp" => "Jan 27 06:32:07",
  "logsource" => "es1",
  "program" => "sshd",
  "pid" => "3734",
  "severity" => 6,
  "facility" => 10,
  "facility_label" => "security/authorization",
  "severity_label" => "Informational"
  }
  {
  "message" => "pam_unix(sshd:session): session opened for user root by (uid=0)\n",
  "@version" => "1",
  "@timestamp" => "2018-01-27T11:32:07.000Z",
  "type" => "syslog",
  "host" => "192.168.4.11",
  "priority" => 86,
  "timestamp" => "Jan 27 06:32:07",
  "logsource" => "es1",
  "program" => "sshd",
  "pid" => "3734",
  "severity" => 6,
  "facility" => 10,
  "facility_label" => "security/authorization",
  "severity_label" => "Informational"
  }
  · The filter grok plugin
  – a plugin for parsing all kinds of unstructured log data
  – grok uses regular expressions to turn unstructured data into structured data
  – with grouped matching, the regular expression must be written for the specific data layout
  – hard to write, but extremely widely applicable
  – usable on almost any kind of data
  · grok regex group matching
  – match an IP, a timestamp, and the request method (the group names were stripped by the forum; restored here to match the macro version below)
  "(?<clientip>(\d+\.){3}\d+) \S+ \S+
  (?<timestamp>.*\])\s+\"(?<verb>[A-Z]+)"
  – using pattern macros
  %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth}
  \[%{HTTPDATE:timestamp}\] \"%{WORD:verb}
  – final version
  %{COMMONAPACHELOG} \"(?<referer>[^\"]+)\"
  \"(?<useragent>[^\"]+)\"
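The raw-regex half of this can be tried outside grok. A sketch using grep -E to pull the client IP out of a log line with the same `(\d+\.){3}\d+`-style expression (grep's POSIX syntax uses `[0-9]` in place of `\d`; this mimics only the IP part of the pattern):

```shell
line='220.181.108.115 - - "GET /index.html HTTP/1.1" 200 20756'
ip=$(echo "$line" | grep -oE '^([0-9]+\.){3}[0-9]+')
echo "$ip"    # → 220.181.108.115
```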
  Exercise: matching Apache logs
  # cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
  # vim grok-patterns   // the library of predefined pattern macros
  ....
  COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
  ....
  # vim /tmp/test.log      // add one log entry to the test file
  220.181.108.115 - - [11/Jul/2017:03:07:16 +0800] "GET /%B8%DF%BC%B6%D7%E2%C1%DE%D6%F7%C8%CE/QYQiu_j.html HTTP/1.1" 200 20756 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "-" zxzs.buildhr.com 558742
  

  # vim logstash.conf
  input{
  file{
  start_position => "beginning"
  sincedb_path => "/dev/null"      // convenient while debugging: never remember the read position
  path => [ "/tmp/test.log" ]
  type => 'filelog'
  }
  }

  filter{
  grok{
  match => ["message","%{COMMONAPACHELOG}"]
  }
  }
  output{
  stdout{ codec => "rubydebug"}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  {
  "message" => "220.181.108.115 - - [11/Jul/2017:03:07:16 +0800] \"GET /%B8%DF%BC%B6%D7%E2%C1%DE%D6%F7%C8%CE/QYQiu_j.html HTTP/1.1\" 200 20756 \"-\" \"Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)\" \"-\" zxzs.buildhr.com 558742",
  "@version" => "1",
  "@timestamp" => "2018-01-27T12:03:10.363Z",
  "path" => "/tmp/test.log",
  "host" => "0.0.0.0",
  "type" => "filelog",
  "clientip" => "220.181.108.115",
  "ident" => "-",
  "auth" => "-",
  "timestamp" => "11/Jul/2017:03:07:16 +0800",
  "verb" => "GET",
  "request" => "/%B8%DF%BC%B6%D7%E2%C1%DE%D6%F7%C8%CE/QYQiu_j.html",
  "httpversion" => "1.1",
  "response" => "200",
  "bytes" => "20756"
  }
  {
  "message" => "",
  "@version" => "1",
  "@timestamp" => "2018-01-27T12:03:10.584Z",
  "path" => "/tmp/test.log",
  "host" => "0.0.0.0",
  "type" => "filelog",
  "tags" => [
   "_grokparsefailure"
  ]
  }
  When writing a regex to parse logs, look in the pattern macro library first; if it is not there, search online. Try not to write one from scratch by hand, it is exhausting.
  Exercise: parsing different logs at the same time
  # vim logstash.conf
  input{
  file{
  start_position => "beginning"
  sincedb_path => "/dev/null"
  path => [ "/tmp/test.log" ]
  type => 'filelog'
  }
  file{
  start_position => "beginning"
  sincedb_path => "/dev/null"
  path => [ "/tmp/test.json" ]
  type => 'jsonlog'
  codec => 'json'
  }
  }
  filter{
  if [type] == "filelog" {
  grok{
  match => ["message","%{COMMONAPACHELOG}"]
  }
  }
  }
  output{
  stdout{ codec => "rubydebug"}
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started
  {
  "message" => "220.181.108.115 - - [11/Jul/2017:03:07:16 +0800] \"GET /%B8%DF%BC%B6%D7%E2%C1%DE%D6%F7%C8%CE/QYQiu_j.html HTTP/1.1\" 200 20756 \"-\" \"Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)\" \"-\" zxzs.buildhr.com 558742",
  "@version" => "1",
  "@timestamp" => "2018-01-27T12:22:06.481Z",
  "path" => "/tmp/test.log",
  "host" => "0.0.0.0",
  "type" => "filelog",
  "clientip" => "220.181.108.115",
  "ident" => "-",
  "auth" => "-",
  "timestamp" => "11/Jul/2017:03:07:16 +0800",
  "verb" => "GET",
  "request" => "/%B8%DF%BC%B6%D7%E2%C1%DE%D6%F7%C8%CE/QYQiu_j.html",
  "httpversion" => "1.1",
  "response" => "200",
  "bytes" => "20756"
  }
  {
  "message" => "",
  "@version" => "1",
  "@timestamp" => "2018-01-27T12:22:06.834Z",
  "path" => "/tmp/test.log",
  "host" => "0.0.0.0",
  "type" => "filelog",
  "tags" => [
   "_grokparsefailure"
  ]
  }
  {
  "@timestamp" => "2015-05-18T12:28:25.013Z",
  "ip" => "79.1.14.87",
  "extension" => "gif",
  "response" => "200",
  "geo" => {
  "coordinates" => {
  "lat" => 35.16531472,
  "lon" => -107.9006142
  },
  "src" => "GN",
  "dest" => "US",
  "srcdest" => "GN:US"
  },
  "@tags" => [
   "success",
   "info"
  ],
  "utc_time" => "2015-05-18T12:28:25.013Z",
  "referer" => "http://www.slate.com/warning/b-alvin-drew",
  "agent" => "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
  "clientip" => "79.1.14.87",
  "bytes" => 774,
  "host" => "motion-media.theacademyofperformingartsandscience.org",
  "request" => "/canhaz/james-mcdivitt.gif",
  "url" => "https://motion-media.theacademyofperformingartsandscience.org/canhaz/james-mcdivitt.gif",
  "@message" => "79.1.14.87 - - \"GET /canhaz/james-mcdivitt.gif HTTP/1.1\" 200 774 \"-\" \"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24\"",
  "spaces" => "this   is   a   thing    with lots of   spaces       wwwwoooooo",
  "xss" => "console.log(\"xss\")",
  "headings" => [
   "charles-bolden",
   "http://www.slate.com/success/barry-wilmore"
  ],
  "links" => [
   "george-nelson@twitter.com",
   "http://facebook.com/info/anatoly-solovyev",
   "www.www.slate.com"
  ],
  "relatedContent" => [],
  "machine" => {
  "os" => "osx",
  "ram" => 8589934592
  },
  "@version" => "1",
  "path" => "/tmp/test.json",
  "type" => "jsonlog"
  }
  In a real project, you can talk to the developers and ask them to provide logs in JSON format; that makes everything much easier.

  The elasticsearch output plugin
  After debugging succeeds, write the data into the ES cluster
  # cat logstash.conf
  input{
  file{
  start_position => "beginning"
  sincedb_path => "/dev/null"
  path => [ "/tmp/test.log" ]
  type => 'filelog'
  }
  file{
  start_position => "beginning"
  sincedb_path => "/dev/null"
  path => [ "/tmp/test.json" ]
  type => 'jsonlog'
  codec => 'json'
  }
  }
  filter{
  if [type] == "filelog" {
  grok{
  match => ["message","%{COMMONAPACHELOG}"]
  }
  }
  }
  output{
  if [type] == "filelog" {
  elasticsearch {
  hosts => ["192.168.4.11:9200","192.168.4.12:9200"]
  index => "weblog"
  flush_size => 2000
  idle_flush_time => 10
  }
  }
  }
  # /opt/logstash/bin/logstash -f logstash.conf
  Settings: Default pipeline workers: 1
  Pipeline main started