Building an Open-Source Log Analysis Platform with ELK (Elasticsearch + Logstash + Kibana)
Environment
System: CentOS 7.2 x86_64
hostname: elk-server.huangming.org
IP Address: 10.0.6.42, 10.17.83.42
This guide uses a single-node deployment: all of the ELK packages are installed on one server with the following specs:
CPU: 4 cores
Mem: 8 GB
Disk: 50 GB
1. Installing Elasticsearch
1) Install JDK 1.8 or later (installation steps omitted)
# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
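In a provisioning script it is handier to check the JDK version programmatically than by eye. A small sketch that parses the minor version out of the banner; `ver_line` is hard-coded to the sample output above, in practice capture it with `java -version 2>&1 | head -n1`:

```shell
# Sketch: verify the installed JDK is 1.8+ by parsing the version banner.
# ver_line is a hard-coded sample; real use: ver_line=$(java -version 2>&1 | head -n1)
ver_line='java version "1.8.0_65"'
minor=$(echo "$ver_line" | sed -n 's/.*"1\.\([0-9]*\)\..*/\1/p')
if [ "$minor" -ge 8 ]; then
    echo "JDK OK: $ver_line"
else
    echo "JDK too old, need 1.8+" >&2
fi
```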
2) Download the latest Elasticsearch release (5.5 at the time of writing)
Download the tarball with curl:
# curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.2.tar.gz
Or download it with wget:
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.2.tar.gz
3) Extract into the install directory /usr/local/ and rename it to elasticsearch (full path: /usr/local/elasticsearch)
# tar -zxf elasticsearch-5.5.2.tar.gz -C /usr/local/
# cd /usr/local/
# mv elasticsearch-5.5.2 elasticsearch
4) Create an unprivileged user to run elasticsearch and change ownership of the elasticsearch home directory to that user; also create the data directory /data/elasticsearch
# groupadd elasticsearch
# useradd -g elasticsearch elasticsearch -m
# chown -R elasticsearch. /usr/local/elasticsearch
# mkdir -p /data/elasticsearch
# chown -R elasticsearch. /data/elasticsearch
5) Edit the elasticsearch.yml configuration file
# vim config/elasticsearch.yml
cluster.name: my-application               # ELK cluster name
path.data: /data/elasticsearch             # elasticsearch data directory
path.logs: /usr/local/elasticsearch/logs   # elasticsearch log directory
network.host: 10.17.83.42                  # listen address, defaults to localhost
http.port: 9200                            # listen port, defaults to 9200
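If the same setup is repeated on several hosts, the settings above can be templated from shell variables. A minimal sketch; it writes to /tmp so it is safe to try as-is, adjust the output path for real use:

```shell
# Sketch: render elasticsearch.yml from shell variables so one script
# works across hosts. Values mirror the settings shown above.
ES_CLUSTER="my-application"
ES_DATA="/data/elasticsearch"
ES_LOGS="/usr/local/elasticsearch/logs"
ES_HOST="10.17.83.42"
ES_PORT=9200

# Writing to /tmp here so the sketch needs no privileges.
cat > /tmp/elasticsearch.yml <<EOF
cluster.name: ${ES_CLUSTER}
path.data: ${ES_DATA}
path.logs: ${ES_LOGS}
network.host: ${ES_HOST}
http.port: ${ES_PORT}
EOF

grep '^cluster.name' /tmp/elasticsearch.yml   # -> cluster.name: my-application
```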
6) Adjust kernel parameters and resource limits
# Open /etc/security/limits.conf and add the following:
* soft nproc 2048
* hard nproc 16384
* soft nofile 65536
* hard nofile 65536
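The limits only take effect on a new login session, so after editing, re-login as the elasticsearch user and confirm them. A small sketch for comparing a current limit against the required minimum; the `check_limit` helper is illustrative, not a standard tool:

```shell
# Sketch: compare a current ulimit value against the required minimum.
# check_limit <current> <required> prints whether an increase is needed.
check_limit() {
    if [ "$1" -ge "$2" ]; then
        echo "ok ($1 >= $2)"
    else
        echo "increase required: $1 < $2"
    fi
}

check_limit "$(ulimit -n)" 65536    # open files on this session
check_limit 65536 65536             # -> ok (65536 >= 65536)
```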
# Set vm.max_map_count to 262144
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# sysctl -p
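If the setup script may be run more than once, the sysctl change is better applied idempotently so repeated runs never duplicate the entry. A sketch using a hypothetical `set_sysctl` helper, demonstrated on a temp file; point `conf` at /etc/sysctl.conf and follow with `sysctl -p` as root for real use:

```shell
# Sketch: set a sysctl key idempotently -- replace the line if present,
# append otherwise. Demonstrated on a temp file, not /etc/sysctl.conf.
conf=$(mktemp)

set_sysctl() {   # set_sysctl <file> <key> <value>
    if grep -q "^$2" "$1"; then
        sed -i "s|^$2.*|$2=$3|" "$1"
    else
        echo "$2=$3" >> "$1"
    fi
}

set_sysctl "$conf" vm.max_map_count 262144
set_sysctl "$conf" vm.max_map_count 262144   # second run changes nothing

grep -c '^vm.max_map_count' "$conf"          # prints 1, not 2
rm -f "$conf"
```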
7) Run elasticsearch (note: switch to the unprivileged user first)
# su - elasticsearch
$ ./bin/elasticsearch
Startup output:
$ cd /usr/local/elasticsearch/
$ ./bin/elasticsearch
initializing ...
(data paths, heap size, node name and version fields were lost when this log was captured)
JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/elasticsearch]
loaded module (x11; module names lost in capture)
no plugins loaded
using discovery type
initialized
starting ...
publish_address {10.17.83.42:9300}, bound_addresses {10.17.83.42:9300}
bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
new_master {6eN59Pf}{6eN59PfuS7iVRfoEppsngg}{hs82h4vkTwKCEvKybCodbw}{10.17.83.42}{10.17.83.42:9300}, reason: zen-disco-elected-as-master ( nodes joined)
publish_address {10.17.83.42:9200}, bound_addresses {10.17.83.42:9200}
started
recovered indices into cluster_state
[.kibana] creating index
Normally elasticsearch should run in the background; use:
$ ./bin/elasticsearch -d
8) Check the elasticsearch status; output like the following means it is running normally:
# curl http://10.17.83.42:9200
{
"name" : "6eN59Pf",
"cluster_name" : "my-application",
"cluster_uuid" : "cKopwE1iTciIQAiFI6d8Gw",
"version" : {
"number" : "5.5.2",
"build_hash" : "b2f0c09",
"build_date" : "2017-08-14T12:33:14.154Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
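In a script, the version number can be pulled out of this response without a JSON parser. A rough sed sketch; `resp` holds a trimmed sample of the JSON above, in practice capture it with `resp=$(curl -s http://10.17.83.42:9200)`:

```shell
# Sketch: extract "version.number" from the banner response with sed.
# resp is a trimmed sample; real use: resp=$(curl -s http://10.17.83.42:9200)
resp='{ "version" : { "number" : "5.5.2", "lucene_version" : "6.6.0" } }'
echo "$resp" | sed -n 's/.*"number" : "\([^"]*\)".*/\1/p'
# prints: 5.5.2
```

This depends on the pretty-printed `"key" : "value"` spacing shown above; for anything more robust, use a real JSON tool.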
2. Installing Logstash
1) Download the logstash package
# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.tar.gz
2) Extract into the install directory
# tar -zxf logstash-5.5.2.tar.gz -C /usr/local/
# cd /usr/local/
# mv logstash-5.5.2 logstash
3) Run logstash
# cd logstash/
# ./bin/logstash -e 'input { stdin { } } output { stdout {} }'
Type "hello world!" and check that it is echoed back:
# ./bin/logstash -e 'input { stdin { } } output { stdout {} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash/data/queue"}
Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/local/logstash/data/dead_letter_queue"}
No persistent UUID file found. Generating new UUID {:uuid=>"2fb479ab-0ca5-4979-89b1-4246df9a7472", :path=>"/usr/local/logstash/data/uuid"}
Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
Pipeline main started
The stdin plugin is now waiting for input:
Successfully started Logstash API endpoint {:port=>9600}
hello world!
2017-08-28T07:11:42.724Z elk-server.huangming.org hello world!
3. Installing Kibana
1) Download kibana
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.2-linux-x86_64.tar.gz
2) Extract into the install directory
# tar -zxf kibana-5.5.2-linux-x86_64.tar.gz -C /usr/local/
# cd /usr/local/
# mv kibana-5.5.2-linux-x86_64 kibana
3) Edit the configuration
# cd kibana/
# vim config/kibana.yml
server.port: 5601                             # listen port
server.host: "10.17.83.42"                    # server bind address
elasticsearch.url: "http://10.17.83.42:9200"  # elasticsearch instance URL
4) Run kibana
# ./bin/kibana &
# ./bin/kibana &
3219
# log Status changed from uninitialized to green - Ready
log Status changed from uninitialized to yellow - Waiting for Elasticsearch
log Status changed from uninitialized to green - Ready
log Status changed from uninitialized to green - Ready
log Status changed from yellow to green - Kibana index ready
log Status changed from uninitialized to green - Ready
log Server running at http://10.17.83.42:5601
log Status changed from uninitialized to green - Ready
5) Verify kibana
Open http://10.17.83.42:5601 in a browser.
The page prompts you to create an index pattern. Start with Kibana's default index (named .kibana); note that an index pattern cannot be created if no matching index exists.
The Status page shows the server's running state and installed plugins.
The ELK stack is now up. Next, let's build an example that collects the system's messages log.
4. Collecting syslog Logs
1) Create the pipeline configuration file
# cd /usr/local/logstash/
# vim config/logstash.conf
input {
file {
path => ["/var/log/messages"]
type => "syslog"
}
}
filter {
grok {
match => [ "message", "%{SYSLOGBASE} %{GREEDYDATA:content}" ]
}
}
output {
elasticsearch {
hosts => ["10.17.83.42:9200"]
index => "syslog-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
The line match => [ "message", "%{SYSLOGBASE} %{GREEDYDATA:content}" ] can also be written out explicitly as: match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}: %{GREEDYDATA:content}" ]
See the official grok filter documentation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
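Before wiring the pattern into logstash, you can smoke-test the match shape with grep. The ERE below is an approximation of %{SYSLOGBASE} (timestamp, host, program[pid]:), a simplification of the real grok pattern, useful only as a quick sanity check:

```shell
# Sketch: approximate ERE for "%{SYSLOGBASE} %{GREEDYDATA}" -- a
# simplified stand-in for the grok pattern, for smoke-testing only.
syslog_re='^[A-Z][a-z]{2} +[0-9]+ [0-9]{2}:[0-9]{2}:[0-9]{2} [^ ]+ [^ :[]+(\[[0-9]+\])?: '

line='Aug 28 16:46:15 elk-server root: helloooooooo33'
if echo "$line" | grep -Eq "$syslog_re"; then
    echo "line matches"
fi
```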
2) Run logstash with the configuration file
# ./bin/logstash -f ./config/logstash.conf &
# ./bin/logstash -f ./config/logstash.conf &
4479
# ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.17.83.42:9200/]}}
Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.17.83.42:9200/, :path=>"/"}
Restored connection to ES instance {:url=>"http://10.17.83.42:9200/"}
Using mapping template from {:path=>nil}
Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.17.83.42:9200"]}
Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
Pipeline main started
Successfully started Logstash API endpoint {:port=>9600}
3) In the Kibana UI, go to Management --> Index Patterns --> Create index
In "Index name or pattern", enter the index name specified in the logstash config, replacing the date suffix with * (i.e. syslog-*).
Once created, open the Discover page.
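The wildcard works because the output section above writes one index per day. A small sketch showing today's index name and that it falls under syslog-*:

```shell
# Sketch: today's index as named by index => "syslog-%{+YYYY.MM.dd}",
# and a check that it matches the Kibana index pattern syslog-*.
idx="syslog-$(date +%Y.%m.%d)"
case "$idx" in
    syslog-*) echo "$idx matches index pattern syslog-*" ;;
    *)        echo "no match" ;;
esac
```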
4) Verify syslog collection by generating log entries manually:
# logger "helloooooooo22"
# yum install httpd
Check the source log file for the most recently generated entries:
# tail -f -n 10 /var/log/messages
Aug 28 16:46:15 elk-server root: helloooooooo33
Aug 28 16:47:17 elk-server yum: Installed: apr-1.4.8-3.el7.x86_64
Aug 28 16:47:17 elk-server yum: Installed: apr-util-1.5.2-6.el7.x86_64
Aug 28 16:47:18 elk-server yum: Installed: httpd-tools-2.4.6-45.el7.centos.4.x86_64
Aug 28 16:47:18 elk-server yum: Installed: mailcap-2.1.41-2.el7.noarch
Aug 28 16:47:19 elk-server systemd: Reloading.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/ebtables.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/wpa_supplicant.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 28 16:47:19 elk-server yum: Installed: httpd-2.4.6-45.el7.centos.4.x86_64
5) Back in Kibana, refresh the page; the new log entries are now displayed.
Expand the most recent entry to inspect its parsed fields.