After nearly half a month spent building and configuring the ELK environment, I have put together this personal work summary to share with everyone.
1. Command Summary
1.1 Check the ES service ports
# netstat -nlpt | grep -E "9200|9300"
1.2 Install and remove ES plugins
# ./bin/plugin install file:///home/apps/license-2.3.3.zip
# ./bin/plugin install file:///home/apps/marvel-agent-2.3.3.zip
Remove ES plugins:
/usr/local/elasticsearch # ./bin/plugin remove marvel-agent
/usr/local/elasticsearch # ./bin/plugin remove license
1.3 Install and remove the Kibana plugin
/usr/local/kibana # ./bin/kibana plugin --install marvel --url file:///home/apps/marvel-2.3.3.tar.gz
Remove the plugin:
./bin/kibana plugin --remove marvel
1.4 Check a Logstash config file for errors
/usr/local/logstash/etc # ../bin/logstash -f logstash.conf --configtest --verbose
(It prints "Configuration OK" when the file has no problems.)
1.5 Start Logstash
/usr/local/logstash/etc # ../bin/logstash -f logstash.conf
1.6 Verify the ES service
# curl -XGET ES-NODE-IP:9200
1.7 Check the ES cluster status
# curl -XGET ES-NODE-IP:9200/_cluster/health?pretty=true
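The health call returns JSON whose status field is green, yellow, or red. As a rough sketch (the sample response below is hypothetical, not captured from this cluster), the status can be pulled out in the shell like this:

```shell
# Hypothetical sample of a _cluster/health response; live, it would come from:
#   curl -s http://ES-NODE-IP:9200/_cluster/health
health='{"cluster_name":"es_cluster","status":"green","number_of_nodes":3}'
# Extract the status field (green / yellow / red).
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"
```

Roughly: green means all shards are allocated, yellow means some replicas are unassigned, and red means some primary shards are missing.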
1.8 Create and delete an ES index (nginx-logs is the index name)
curl -XPUT http://ES-NODE-IP:9200/nginx-logs/
Delete an ES index:
curl -XDELETE http://ES-NODE-IP:9200/nginx-logs/
1.9 Verify the head plugin
http://ES-NODE-IP:9200/_plugin/head/
1.10 Create the elasticsearch account
# groupadd esuser
# useradd -d /home/esuser -g esuser -m esuser
# passwd esuser
1.11 Check the Kafka service status
# jps
9536 Main
15200 Jps
14647 Kafka
8760 Elasticsearch
21177 -- process information unavailable
14316 QuorumPeerMain
5791 QuorumPeerMain
1.12 Check the Kafka port status
# netstat -nlpt | grep -E "2181|3888"
tcp 0 0 :::2181 :::* LISTEN 14316/java
tcp 0 0 192.168.1.105:3888 :::* LISTEN 5791/java
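The last column of each LISTEN line is PID/program, so the owning PID can be extracted directly; a sketch using one of the saved lines above:

```shell
# One LISTEN line from the netstat output above; the live pipeline would be:
#   netstat -nlpt | grep 2181
line='tcp 0 0 :::2181 :::* LISTEN 14316/java'
# The last field is PID/program; strip the program name to keep the PID.
pid=$(echo "$line" | awk '{print $NF}' | cut -d/ -f1)
echo "$pid"
```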
1.13 Create, delete, and list topics
Create a topic:
/usr/local/kafka # bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
Delete a topic:
/usr/local/kafka # ./bin/kafka-topics.sh --delete --zookeeper 182.180.50.211:2181 --topic nginx-messages
List all topics:
/usr/local/kafka # bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Show details for a topic:
/usr/local/kafka # bin/kafka-topics.sh --describe --zookeeper localhost:2181
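--describe prints one summary line per topic, including its partition count. A sketch parsing an illustrative describe line (the sample text is assumed, not captured output):

```shell
# Illustrative --describe summary line for the test topic created above.
desc='Topic:test PartitionCount:1 ReplicationFactor:1 Configs:'
# Pull out the partition count.
partitions=$(echo "$desc" | grep -o 'PartitionCount:[0-9]*' | cut -d: -f2)
echo "$partitions"
```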
1.14 Create a Kafka producer and consumer
Create a producer:
/usr/local/kafka # bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
Create a consumer:
/usr/local/kafka # bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
If the consumer above receives the message from the producer, the Kafka-on-ZooKeeper environment is configured correctly.
1.15 Test data transfer from a log file
# cp /var/log/messages /home
# > /var/log/messages
# echo "hello kibana aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" >> /var/log/messages
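Truncating with > rather than deleting the file matters here: the inode stays the same, so an agent tailing the file keeps following it. A small demonstration on a temp file:

```shell
# Show that > truncation keeps the same inode (demonstrated on a temp file).
f=$(mktemp)
echo "old line" > "$f"
ino1=$(stat -c %i "$f")
: > "$f"                      # truncate in place
ino2=$(stat -c %i "$f")
[ "$ino1" = "$ino2" ] && echo "same inode"
rm -f "$f"
```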
1.16 Reload the Nginx configuration
# ./nginx -s reload
2. Configuration Summary
2.1 Java environment configuration
# vi /etc/profile
export PATH=$PATH:/soft_ins/mysql/bin
JAVA_HOME=/usr/local/jdk1.8.0_101
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH
# source /etc/profile
# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
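A quick sanity check that the profile above puts the JDK first on PATH (paths taken from the profile snippet; a sketch, not a full environment test):

```shell
# Reproduce the two profile lines and confirm the JDK bin dir leads PATH.
JAVA_HOME=/usr/local/jdk1.8.0_101
PATH=$JAVA_HOME/bin:$PATH
first=${PATH%%:*}             # first PATH entry
echo "$first"
```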
2.2 ES configuration
# vi config/elasticsearch.yml
cluster.name: es_cluster
node.name: node3
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
network.host: 192.168.1.103
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.103","192.168.1.104","192.168.1.105"]
Note:
discovery.zen.ping.unicast.hosts: ["192.168.1.103","192.168.1.104","192.168.1.105"] lists the members of the ES cluster; with a single ES node there is no need to configure it.
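For comparison, a minimal single-node elasticsearch.yml sketch (values assumed from the cluster config above) simply drops the unicast host list:

```yaml
# Single-node sketch: discovery.zen.ping.unicast.hosts is omitted entirely.
cluster.name: es_cluster
node.name: node1
network.host: 192.168.1.103
http.port: 9200
```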
2.3 Kibana configuration
# vi /usr/local/kibana/config/kibana.yml
server.port: 5601
host: "192.168.1.103"
elasticsearch_url: http://192.168.1.103:9200, 192.168.1.104:9200,192.168.1.105:9200
Note:
elasticsearch_url likewise points at the ES cluster; with a single ES node, configure only that one node.
2.4 Combined Kafka and ZooKeeper configuration
A single Kafka & ZooKeeper node:
/usr/local/kafka # vi config/zookeeper.properties
dataDir=/usr/local/kafka/tmp/zookeeper
/usr/local/kafka # vi config/server.properties
log.dirs=/usr/local/kafka/tmp/kafka-logs
ZooKeeper cluster configuration:
# vi config/zookeeper.properties
dataDir=/usr/local/kafka/tmp/zookeeper
initLimit=5
syncLimit=2
server.2=192.168.1.101:2888:3888
server.3=192.168.1.102:2888:3888
server.4=192.168.1.103:2888:3888
Kafka cluster:
/usr/local/kafka # vi config/server.properties
broker.id=2
port=9092
host.name=192.168.1.101
log.dirs=/usr/local/kafka/tmp/kafka-logs
num.partitions=16
zookeeper.connect=192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181
Note: on the other two nodes broker.id is 3 and 4 respectively, and host.name should likewise match the actual host.
Start the ZooKeeper service:
/usr/local/kafka # ./bin/zookeeper-server-start.sh config/zookeeper.properties
Start the Kafka service:
/usr/local/kafka # ./bin/kafka-server-start.sh config/server.properties
2.5 Separate Kafka and ZooKeeper configuration
ZooKeeper configuration:
Generate the ZooKeeper config file:
# cd zookeeper/conf
# cp zoo_sample.cfg zoo.cfg
Edit the config file:
# vi zoo.cfg
dataDir=/usr/local/zookeeper/tmp/zookeeper
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
# cd ..
# mkdir -p tmp/zookeeper
# echo "1" >tmp/zookeeper/myid
Configure ZooKeeper on node2 and node3
Follow the node1 configuration; note that the parameter below differs on each of the three nodes.
Node2:
# echo "2" >tmp/zookeeper/myid
Node3:
# echo "3" >tmp/zookeeper/myid
All other settings are identical.
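The per-node myid steps can be sketched as one small script (NODE_ID is an assumed per-host value: 1 on node1, 2 on node2, 3 on node3; a temp dir stands in for /usr/local/zookeeper):

```shell
base=$(mktemp -d)             # stand-in for /usr/local/zookeeper
NODE_ID=2                     # assumed per-host value: 1, 2, or 3
mkdir -p "$base/tmp/zookeeper"
echo "$NODE_ID" > "$base/tmp/zookeeper/myid"
cat "$base/tmp/zookeeper/myid"
```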
Start the service on each of the three nodes in turn:
# ./bin/zkServer.sh start conf/zoo.cfg
Kafka configuration:
Configure Kafka on node1:
# cd ../kafka
# vi config/server.properties
broker.id=0
port=9092
host.name=x-shcs-creditcard-v01
log.dirs=/usr/local/kafka/tmp/kafka-logs
num.partitions=2
zookeeper.connect=192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181
Configure Kafka on node2 and node3
Follow the node1 configuration; note that the parameters below differ on each of the three nodes.
Node2:
broker.id=1
host.name=node2
Node3:
broker.id=2
host.name=node3
Note:
host.name is the node's hostname.
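If the hostnames really follow the node1..node3 pattern used above (an assumption), broker.id can even be derived from the hostname instead of edited by hand on each node; a sketch:

```shell
# Derive broker.id from an assumed nodeN hostname (node1 -> 0, node2 -> 1, ...).
HOST=node2                    # on a real node this would be $(hostname)
ID=$(( ${HOST#node} - 1 ))    # strip the "node" prefix, shift to zero-based
echo "broker.id=$ID"
echo "host.name=$HOST"
```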
Start Kafka on each of the three nodes in turn:
# ./bin/kafka-server-start.sh config/server.properties
3. Problem Summary
3.1 Error installing the marvel plugin
node3:/usr/local/elasticsearch# ./bin/plugin install file:///home/apps/license-2.3.3.zip
-> Installing from file:/home/apps/license-2.3.3.zip...
Trying file:/home/apps/license-2.3.3.zip ...
Downloading .DONE
Verifying file:/home/apps/license-2.3.3.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
ERROR: Plugin [license] is incompatible with Elasticsearch [2.1.1]. Was designed for version [2.3.3]
The plugin version and the ES version were incompatible; once both were at matching 2.3.3 versions the installation went through.
3.2 Error starting the ES service
/usr/local/elasticsearch > ./bin/elasticsearch
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /usr/local/elasticsearch/logs/es_cluster.log (Permission denied)
This is a file-permission problem: the ES service must be started as a non-root user, and that user had no access to /usr/local/elasticsearch/logs/es_cluster.log.
Fixing the ownership solved it:
# chown -R esuser:esuser /usr/local/elasticsearch/data/
# chown -R esuser:esuser /usr/local/elasticsearch/logs
3.3 Kibana version incompatibility
# ./bin/kibana
{"name":"Kibana","hostname":"node3","pid":20969,"level":50,"err":{"message":"unknown error","name":"Error","stack":"Error: unknown error\n at respond (/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:237:15)\n at checkRespForFailure (/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector. (/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/home/apps/kibana-4.1.4-linux-x64/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2016-08-30T07:07:57.923Z","v":0}
Switching to a newer Kibana version resolved the problem.
3.4 Error reloading the Nginx configuration
/usr/local/nginx/sbin # ./nginx -s reload
nginx: [error] open() "/usr/local/nginx/logs/nginx.pid" failed (2: No such file or directory)
/usr/local/nginx/sbin # ls ../logs/
access.log error.log
/usr/local/nginx/sbin # ./nginx -c /usr/local/nginx/conf/nginx.conf
/usr/local/nginx/sbin # ls ../logs/
access.log error.log nginx.pid
Starting nginx directly with -c regenerates nginx.pid, after which reload works again.
3.5 Error starting Kafka: one broker would not come up, reporting "Failed to acquire lock on file .lock in /usr/local/kafka/tmp/kafka-logs."
Stop the Kafka service, check which node still has a leftover Kafka process, kill that process, and then restart; after that it comes up fine.
3.6 Kafka topic deletion fails (still unresolved)
/usr/local/kafka # ./bin/kafka-topics.sh --delete --zookeeper 192.168.1.101:2181 --topic nginx-messages
Topic nginx-messages is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true
Following advice found online, I changed the setting.
The config file is in the kafka/config directory:
# vi server.properties
delete.topic.enable=true
But the topic still could not be deleted after the change.
4. Logstash Configuration Files
4.1 Configuration one
input {
stdin {
type => "stdin-type"
}
file {
type => "syslog-ng"
#Wildcards work, here :)
path => [ "/var/log/*.log", "/var/log/messages","/var/log/syslog" ]
}
}
output {
stdout { }
elasticsearch {
hosts => ["192.168.1.101:9200","192.168.1.102:9200","192.168.1.103:9200"]
}
}
In configuration one, the machines whose logs are collected ship them straight to the ES cluster through their local Logstash.
4.2 Configuration two
input {
file {
type => "system-message"
path => "/var/log/messages"
start_position => "beginning"
}
}
output {
#stdout { codec => rubydebug }
kafka {
bootstrap_servers => "192.168.1.103:9092"
topic_id => "system-messages"
compression_type => "snappy"
}
}
In configuration two, the machines whose logs are collected ship them through their local Logstash to a single Kafka node.
4.3 Configuration three
input {
kafka {
zk_connect => "192.168.1.103:2181"
topic_id => "System-Messages"
codec => plain
reset_beginning => false
consumer_threads => 5
decorate_events => true
}
}
output {
elasticsearch {
hosts => "192.168.1.103:9200"
index => "test-System-Messages-%{+YYYY-MM}"
}
}
In configuration three, logs gathered by the Kafka node are then forwarded to the ES node.
4.4 Configuration four
# vi /usr/local/logstash/etc/logstash_shipper.conf
input {
file {
type => "system-message"
path => "/var/log/messages"
start_position => "beginning"
}
}
output {
stdout { codec => rubydebug }
kafka {
bootstrap_servers => "192.168.1.101:9092,192.168.1.102:9092,192.168.1.103:9092"
topic_id => "messages"
compression_type => "snappy"
}
}
The Logstash consumer side:
/usr/local/logstash # vi etc/logstash_indexer.conf
input {
kafka {
zk_connect => "192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181"
topic_id => "system-message"
codec => plain
reset_beginning => false
consumer_threads => 5
decorate_events => true
}
}
output {
elasticsearch {
hosts => "192.168.1.105:9200"
index => "test-system-messages-%{+YYYY-MM}"
}
}
In configuration four, the machines whose logs are collected ship them through their local Logstash to the Kafka cluster, and the Kafka cluster then forwards the logs to the ES node.
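The %{+YYYY-MM} suffix in the index names above is Logstash's date pattern: each event lands in a monthly index. A sketch of the resulting name for an August 2016 event (pure string handling, not the Logstash code itself):

```shell
ts="2016-08-30"                         # example event date
idx="test-system-messages-${ts%-*}"     # drop the day -> YYYY-MM
echo "$idx"
```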