一、Environment
1、RPM
1) Collect the rpm packages
wget https://artifacts.elastic.co/dow ... ticsearch-5.2.0.rpm
wget https://artifacts.elastic.co/dow ... na-5.2.0-x86_64.rpm
wget https://artifacts.elastic.co/dow ... /logstash-5.2.0.rpm
wget https://artifacts.elastic.co/dow ... at-5.2.0-x86_64.rpm
2) Cache the rpm packages in a local yum repository
2、Installation
[Server]
1) ELK
[iyunv@vm220 ~]# yum install elasticsearch kibana logstash -y
2) jdk
(omitted)
[Client]
1) filebeat
[iyunv@vm49 ~]# yum install filebeat -y
3、Prerequisites
Suppose we need to collect the access and error logs of the following 2 domains:
www.test.com
www.work.com
The access log uses the following format:
log_format online '$remote_addr [$time_local] "$request" '
'"$http_content_type" "$request_body" "$http_referer" '
'$status $request_time $body_bytes_sent';
The error log uses the default level (error).
Additional requirement: use a separate index for each domain.
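To make the custom "online" log_format above concrete, here is a small Python sketch that parses a sample line in that format. The regular expression and the sample line are illustrative approximations written for this example, not taken from the deployment itself:

```python
import re

# A regex mirroring the custom "online" nginx log_format:
# $remote_addr [$time_local] "$request" "$http_content_type"
# "$request_body" "$http_referer" $status $request_time $body_bytes_sent
ACCESS_RE = re.compile(
    r'(?P<remote_addr>\S+) \[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'"(?P<content_type>[^"]*)" "(?P<request_body>[^"]*)" "(?P<referer>[^"]*)" '
    r'(?P<status>\d{3}) (?P<request_time>[\d.]+) (?P<body_bytes_sent>\d+)'
)

# A fabricated sample line for illustration.
line = ('1.2.3.4 [13/Feb/2017:14:14:00 +0800] "GET /index.html HTTP/1.1" '
        '"-" "-" "-" 200 0.005 612')
m = ACCESS_RE.match(line)
print(m.group("status"), m.group("request_time"), m.group("body_bytes_sent"))
```

The grok pattern configured for Logstash later in this article captures the same fields server-side.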
二、ELK server-side configuration
1、elasticsearch
1) Configuration file
[iyunv@vm220 ~]# mkdir -p /data/elasticsearch
[iyunv@vm220 ~]# chown elasticsearch:elasticsearch /data/elasticsearch
[iyunv@vm220 ~]# cp -a /etc/elasticsearch/elasticsearch.yml{,.bak}
Adjust the configuration file:
[If elasticsearch is a single node]
[iyunv@vm220 ~]# grep ^[^#] /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster-test
node.name: node-vm220
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 10.50.200.220
[If elasticsearch is a cluster]
[iyunv@vm220 ~]# grep ^[^#] /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster-test
node.name: node-vm220
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 10.50.200.220
discovery.zen.ping.unicast.hosts: ["10.50.200.218", "10.50.200.219", "10.50.200.220"]
discovery.zen.minimum_master_nodes: 2
The other nodes are configured similarly. (Note: with 3 master-eligible nodes, the quorum formula (N/2)+1 gives 2; setting this to 3 would leave the cluster unable to elect a master if any single node went down.)
[Special note] bootstrap.system_call_filter: false
Due to a kernel limitation, the syscall filter cannot be installed on CentOS 6; the error message looks like this:
[2017-02-13T14:14:00,689][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
2) Start the service
[iyunv@vm220 ~]# service elasticsearch start
[iyunv@vm220 ~]# chkconfig elasticsearch on
2、kibana
1) Configuration file
[iyunv@vm220 ~]# grep ^[^#] /etc/kibana/kibana.yml
server.host: "10.50.200.220"
server.name: "es-cluster-test-kibana"
elasticsearch.url: "http://10.50.200.220:9200"
2) Start the service
[iyunv@vm220 ~]# service kibana restart
[iyunv@vm220 ~]# chkconfig kibana on
3) Access
http://10.50.200.220:5601/app/kibana
3、logstash
1) Configure custom patterns
[iyunv@vm220 ~]# mkdir -p /etc/logstash/patterns.d
[iyunv@vm220 ~]# cat /etc/logstash/patterns.d/extra_patterns
NGINXACCESS %{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" (?:%{QS:content_type}|-) (?:%{QS:request_body}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{NUMBER:response} %{BASE16FLOAT:request_time} (?:%{NUMBER:bytes}|-)
NGINXERROR_DATESTAMP %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}
NGINXERROR_PID (?:[0-9]+#[0-9]+\:)
NGINXERROR_TID (?:\*[0-9]+)
NGINXERROR %{NGINXERROR_DATESTAMP:timestamp} \[%{LOGLEVEL:loglevel}\] %{NGINXERROR_PID:pid} %{NGINXERROR_TID:tid} %{GREEDYDATA:errormsg}, client: %{IPORHOST:clientip}, server: %{HOSTNAME:server}, request: %{QS:request}(?:, upstream: %{QS:upstream})?, host: \"%{HOSTNAME:hostname}\"(?:, referrer: (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}))?
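As a rough illustration of what the custom NGINXERROR sub-patterns capture, the Python sketch below uses equivalent regular expressions against a fabricated error-log prefix. The sample line and its IDs are invented for the example; only the pattern shapes come from the pattern file above:

```python
import re

# Python equivalents of the custom sub-patterns defined above:
# NGINXERROR_DATESTAMP, LOGLEVEL, NGINXERROR_PID, NGINXERROR_TID, GREEDYDATA.
ERROR_PREFIX_RE = re.compile(
    r'(?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) '  # NGINXERROR_DATESTAMP
    r'\[(?P<loglevel>\w+)\] '                               # LOGLEVEL
    r'(?P<pid>\d+#\d+:) '                                   # NGINXERROR_PID
    r'(?P<tid>\*\d+) '                                      # NGINXERROR_TID
    r'(?P<errormsg>.*)'                                     # GREEDYDATA
)

# A fabricated error-log prefix for illustration.
line = '2017/02/13 14:14:00 [error] 1234#0: *5678 open() failed'
m = ERROR_PREFIX_RE.match(line)
print(m.group("timestamp"), m.group("loglevel"), m.group("tid"))
```

The full NGINXERROR grok pattern additionally pulls out client, server, request, upstream, host, and referrer from the trailing message.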
[iyunv@vm220 ~]# grep ^[^#] /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash
[Special note] GeoLite has been updated and its format has changed; please download the latest version.
cd /etc/logstash/ && mkdir geoip && cd geoip
wget http://geolite.maxmind.com/downl ... oLite2-City.mmdb.gz
gunzip GeoLite2-City.mmdb.gz
2) Adjust the configuration file
[iyunv@vm220 logstash]# cat conf.d/filebeat.conf
input {
  beats {
    port => "5044"
  }
}
filter {
  if [type] =~ "NginxAccess-" {
    grok {
      patterns_dir => ["/etc/logstash/patterns.d"]
      match => {
        "message" => "%{NGINXACCESS}"
      }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }
  }
  if [type] =~ "NginxError-" {
    grok {
      patterns_dir => ["/etc/logstash/patterns.d"]
      match => {
        "message" => "%{NGINXERROR}"
      }
    }
    date {
      match => [ "timestamp", "YYYY/MM/dd HH:mm:ss" ]
      remove_field => [ "timestamp" ]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }
  }
}
output {
  if [type] == "NginxAccess-www.test.com" {
    elasticsearch {
      hosts => "10.50.200.220:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-nginxaccess-www.test.com-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  if [type] == "NginxAccess-www.work.com" {
    elasticsearch {
      hosts => "10.50.200.220:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-nginxaccess-www.work.com-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  if [type] == "NginxError-www.test.com" {
    elasticsearch {
      hosts => "10.50.200.220:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-nginxerror-www.test.com-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  if [type] == "NginxError-www.work.com" {
    elasticsearch {
      hosts => "10.50.200.220:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-nginxerror-www.work.com-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
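The output section routes each filebeat type to its own daily index, which satisfies the "separate index per domain" requirement. The Python sketch below mirrors that naming scheme; the helper function is hypothetical, written only to illustrate the index => "..." templates above:

```python
from datetime import date

def index_for(doc_type: str, day: date, beat: str = "filebeat") -> str:
    """Mimic the Logstash index template:
    %{[@metadata][beat]}-<kind>-<domain>-%{+YYYY.MM.dd}
    e.g. "NginxAccess-www.test.com" -> "filebeat-nginxaccess-www.test.com-2017.02.13"
    """
    kind, _, domain = doc_type.partition("-")
    return f"{beat}-{kind.lower()}-{domain}-{day:%Y.%m.%d}"

print(index_for("NginxAccess-www.test.com", date(2017, 2, 13)))
# -> filebeat-nginxaccess-www.test.com-2017.02.13
```

Daily indices of this shape are what the filebeat-* wildcard operations later in this article (template import, index cleanup) act on.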
3) Start the service
On CentOS 6, use upstart to start the service:
[iyunv@vm220 ~]# initctl restart logstash
On CentOS 7, use systemd to start the service:
[iyunv@vm220 ~]# systemctl start logstash.service
4、filebeat
1) Configuration file
[iyunv@vm49 ~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.www.test.com*.log
  document_type: NginxAccess-www.test.com
- input_type: log
  paths:
    - /var/log/nginx/access.www.work.com*.log
  document_type: NginxAccess-www.work.com
- input_type: log
  paths:
    - /var/log/nginx/error.www.test.com*.log
  document_type: NginxError-www.test.com
- input_type: log
  paths:
    - /var/log/nginx/error.www.work.com*.log
  document_type: NginxError-www.work.com
output.logstash:
  hosts: ["10.50.200.220:5044"]
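The prospector globs above decide which document_type each log file is tagged with, and that type drives all the routing in the Logstash filter and output sections. A minimal Python sketch of the same mapping, assuming the glob patterns from filebeat.yml (the helper function and sample path are illustrative):

```python
from fnmatch import fnmatch

# (pattern, document_type) pairs copied from the filebeat.yml prospectors above.
PROSPECTORS = [
    ("/var/log/nginx/access.www.test.com*.log", "NginxAccess-www.test.com"),
    ("/var/log/nginx/access.www.work.com*.log", "NginxAccess-www.work.com"),
    ("/var/log/nginx/error.www.test.com*.log",  "NginxError-www.test.com"),
    ("/var/log/nginx/error.www.work.com*.log",  "NginxError-www.work.com"),
]

def doc_type_for(path: str):
    """Return the document_type of the first prospector whose glob matches."""
    for pattern, doc_type in PROSPECTORS:
        if fnmatch(path, pattern):
            return doc_type
    return None

print(doc_type_for("/var/log/nginx/access.www.test.com-20170213.log"))
# -> NginxAccess-www.test.com
```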
2) Start the service
[iyunv@vm49 ~]# service filebeat restart
[iyunv@vm49 ~]# chkconfig filebeat on
3) Import the template that ships with filebeat
Template path: /etc/filebeat/filebeat.template.json
You can adjust the default template to suit your needs; for example, compared with the default configuration, the additions are:
(omitted)
"dynamic_templates": [
  {
    "strings_as_keyword": {
      "mapping": {
        "ignore_above": 1024,
        "type": "keyword"
      },
      "match_mapping_type": "string"
    }
  },
  {
    "all_as_doc_values": {
      "mapping": {
        "doc_values": true,
        "ignore_above": 1024,
        "index": "not_analyzed",
        "type": "{dynamic_type}"
      },
      "match": "*"
    }
  }
],
(omitted)
"type": {
  "ignore_above": 1024,
  "type": "keyword"
},
"bytes" : {
  "type" : "long",
  "index": "no"
},
"geoip" : {
  "properties" : {
    "location" : {
      "type" : "geo_point",
      "index": "no"
    }
  }
}
(omitted)
a、Import the template
[iyunv@vm220 ~]# curl -XPUT 'http://10.50.200.220:9200/_template/filebeat?pretty' -d@/etc/filebeat/filebeat.template.json
b、View the template
[iyunv@vm220 ~]# curl 'http://10.50.200.220:9200/_template/filebeat?pretty'
c、Clean up old indices (if the service was just set up and no index has been generated, there is nothing to clean up and this step can be skipped)
First, check the existing indices:
[iyunv@vm220 ~]# curl '10.50.200.220:9200/_cat/indices?v'
Delete all indices matching filebeat-*:
[iyunv@vm220 ~]# curl -XDELETE 'http://10.50.200.220:9200/filebeat-*?pretty'
Check again to confirm the result is as expected:
[iyunv@vm220 ~]# curl '10.50.200.220:9200/_cat/indices?v'
ZYXW、References
1、logstash
https://www.elastic.co/guide/en/ ... nning-logstash.html
2、geoip
https://github.com/elastic/logstash/issues/6167