
Determining where ELK stores its data, and adding cluster nodes

As the configuration file shows, the data location is determined by the path.data setting:

# cat /usr/local/elasticsearch/config/elasticsearch.yml| egrep -v "^$|^#"
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 192.168.100.10
network.port: 9200
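
The same paths can also be read back at runtime through the nodes info API, which reports the settings each node actually started with; a minimal sketch, assuming the master address from the config above (look for path.data and path.logs in the response):

# curl '192.168.100.10:9200/_nodes/settings?pretty'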


# du -s /tmp/elasticsearch/data/
4384    /tmp/elasticsearch/data/
# du -s /tmp/elasticsearch/data/
8716    /tmp/elasticsearch/data/
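
To keep an eye on the growth while data is being indexed, a simple watch loop also works (a sketch; watch ships with procps on most distributions):

# watch -n 10 du -sh /tmp/elasticsearch/data/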

If Elasticsearch was installed from the RPM package, the data and log locations can instead be set in /etc/init.d/elasticsearch:

ES_USER="elasticsearch"
ES_GROUP="elasticsearch"
ES_HOME="/usr/share/elasticsearch"
MAX_OPEN_FILES=65535
MAX_MAP_COUNT=262144
LOG_DIR="/data2/elk/elasticsearch/log/elasticsearch"
DATA_DIR="/data2/elk/elasticsearch/data/elasticsearch"
CONF_DIR="/etc/elasticsearch"
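
If LOG_DIR and DATA_DIR are pointed at new locations like these, the directories must exist and be writable by the ES_USER account before the service is restarted; a minimal sketch reusing the paths and user from the snippet above:

# mkdir -p /data2/elk/elasticsearch/{log,data}/elasticsearch
# chown -R elasticsearch:elasticsearch /data2/elk/elasticsearch
# /etc/init.d/elasticsearch restart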





Configuring the cluster:
Before the cluster is configured, the default data directory structure is:

# ls /tmp/elasticsearch/data/elasticsearch/   
nodes

[Prerequisite for clustering: hostname resolution]

The master and slave nodes must be able to resolve each other's hostnames.
# ping master
PING www.elk.com (192.168.100.10) 56(84) bytes of data.
64 bytes from www.elk.com (192.168.100.10): icmp_seq=1 ttl=64 time=0.073 ms
^C
--- www.elk.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 675ms
rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
# ping slave
PING slave (192.168.100.13) 56(84) bytes of data.
64 bytes from slave (192.168.100.13): icmp_seq=1 ttl=64 time=1.18 ms
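
If DNS is not available, the simplest way to get this resolution is a static entry in /etc/hosts on both machines; a minimal sketch using the addresses from this post:

192.168.100.10   master
192.168.100.13   slave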

[Configuring the cluster]
Configuration on the master node (node one), 192.168.100.10:
# ---------------------------------- Cluster -----------------------------------
cluster.name: elasticsearch-cluster

# ------------------------------------ Node ------------------------------------

node.name: master

# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["master", "slave"]




Configuration on the slave node (node two), 192.168.100.13: install Elasticsearch the same way as above, then configure it:

# egrep -v "^$|^#" config/elasticsearch.yml   

cluster.name: elasticsearch-cluster
node.name: slave
discovery.zen.ping.unicast.hosts: ["master", "slave"]
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 0.0.0.0
network.port: 9200






Start Elasticsearch on the slave:

# sudo su - elasticsearch /usr/local/elasticsearch/bin/elasticsearch
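
This keeps Elasticsearch attached to the terminal; to run it in the background instead, the startup script accepts a -d flag to daemonize (a sketch, reusing the same install path and user):

# sudo su - elasticsearch -c "/usr/local/elasticsearch/bin/elasticsearch -d"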






[Verification]
Open the head plugin to view the cluster status:

http://192.168.100.10:9200/_plugin/head/
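
If the head plugin is not installed yet, it can be added with the plugin tool bundled with Elasticsearch 2.x (a sketch, assuming the install path used earlier), after which the URL above becomes available:

# /usr/local/elasticsearch/bin/plugin install mobz/elasticsearch-head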




Or check it with the cluster health API:

# curl 192.168.100.10:9200/_cluster/health?pretty





{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 26,
  "active_shards" : 26,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 26,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}






Status meanings:
green  - all primary and replica shards are available
yellow - all primary shards are available, but some replica shards are unavailable
red    - some primary shards are unavailable

In the chapters that follow, we will look at what primary shards and replica shards are, and explain what these statuses mean in a real environment.
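
When the status is yellow, the _cat APIs show which shards are affected; for example, listing the unassigned shards (a sketch, again using the master's address):

# curl -s '192.168.100.10:9200/_cat/shards?v' | grep UNASSIGNED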



[Why the status was yellow rather than green]
I had modified the master node's configuration file but had not restarted it; after a restart the status went back to normal (green).
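
After the restart, a quick way to confirm that both nodes have joined the cluster is the _cat/nodes API (a sketch):

# curl '192.168.100.10:9200/_cat/nodes?v'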


[Result]




After this, the data is stored under a directory named after the cluster, elasticsearch-cluster:
/tmp/elasticsearch/data/elasticsearch-cluster


Of course, the data from the earlier single-node setup is gone, so reopening Kibana requires creating the index pattern again.

Only today's index is left in the data directory:
# ls /tmp/elasticsearch/data/elasticsearch-cluster/nodes/0/indices/
.kibana/             logstash-2016.05.15/






[Watching the data grow]
To feed it some more data:

# for i in {1..100000}; do echo "message $i" >> /var/log/messages; done





Check the result:
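
One command-line way to confirm the new lines were indexed is to list the indices with their document counts and sizes; a sketch, assuming Logstash is shipping /var/log/messages into the logstash-* indices as above:

# curl '192.168.100.10:9200/_cat/indices?v'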








