[Experience Sharing] Recovering unassigned shards on elasticsearch 2.x (replicas can be set to 0 and then restored)

Posted on 2017-11-22 22:37:48
Recovering unassigned shards on elasticsearch 2.x
  Source: https://z0z0.me/recovering-unassigned-shards-on-elasticsearch/
  I came across this problem when I decided to add a node to the elasticsearch cluster and that node was not able to replicate the indexes of the cluster. This issue usually happens when there is not enough disk space available, no master available, or a different elasticsearch version. While my servers had more than enough disk space and the master was available, with the help of the elasticsearch discuss forum I found out that the new node was running a different version than the old nodes. Basically, while installing on Debian jessie I just ran apt-get install elasticsearch, which ended up installing the latest available version. To install a specific version of elasticsearch you pretty much need to append ={version}:

#apt-get install elasticsearch={version}
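
  Before going further it is worth confirming that every node now reports the same version. The cat nodes API lists one row per node (a minimal check; the host and port are assumed to match the examples below):

#curl 'http://localhost:9200/_cat/nodes?h=name,version'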

  Now that I had identified the reason for the unallocated shards and successfully downgraded elasticsearch to the required version by running the command above, I started the node, but the cluster was still in a red state with unassigned shards all over the place:

#curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "z0z0",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 6,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 8,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 60.0
}
#curl http://localhost:9200/_cat/shards
site-id      4 p UNASSIGNED                                                
site-id      4 r UNASSIGNED                                                
site-id      1 p UNASSIGNED                                                
site-id      1 r UNASSIGNED                                                
site-id      3 p STARTED    0 159b 10.0.0.6 node-2
site-id      3 r STARTED    0 159b 10.0.0.7 node-3
site-id      2 r STARTED    0 159b 10.0.0.6 node-2
site-id      2 p STARTED    0 159b 10.0.0.7 node-3
site-id      0 r STARTED    0 159b 10.0.0.6 node-2
site-id      0 p STARTED    0 159b 10.0.0.7 node-3
subscription 4 p UNASSIGNED                                                
subscription 4 r UNASSIGNED                                                
subscription 1 p UNASSIGNED                                                
subscription 1 r UNASSIGNED                                                
subscription 3 p STARTED    0 159b 10.0.0.6 node-2
subscription 3 r STARTED    0 159b 10.0.0.7 node-3
subscription 2 r STARTED    0 159b 10.0.0.6 node-2
subscription 2 p STARTED    0 159b 10.0.0.7 node-3
subscription 0 p STARTED    0 159b 10.0.0.6 node-2
subscription 0 r STARTED    0 159b 10.0.0.7 node-3
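
  Before trying to fix anything it can also help to ask elasticsearch why the shards are unassigned. On 2.x the cat shards API accepts an unassigned.reason column (a sketch; same host and port assumed as above):

#curl 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason'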

  At this point I was pretty desperate, and whatever I tried either did nothing or ended up in all kinds of failures. So I set number_of_replicas to 0 by running the following query:

#curl -XPUT http://localhost:9200/_settings?pretty -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'
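
  Note that a PUT to /_settings without an index name applies the change to every index in the cluster. To limit it to the affected indices, the index names can be listed in the URL instead; a sketch using the two indices from this cluster:

#curl -XPUT http://localhost:9200/site-id,subscription/_settings?pretty -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'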

  Then I stopped the nodes one by one until only one live node remained. At that point I decided to try rerouting the unassigned shards, and if that did not work I would just rebuild the cluster from scratch. So I ran the following:

#curl -XPOST -d '
{
  "commands" : [ {
    "allocate" : {
      "index" : "site-id",
      "shard" : 1,
      "node" : "node-3",
      "allow_primary" : true
    }
  } ]
}' http://localhost:9200/_cluster/reroute?pretty
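
  Repeating the reroute for every remaining UNASSIGNED shard can be scripted. Below is a minimal sketch that assumes all shards should land on node-3; note that allow_primary forces an empty primary into existence when no copy of the shard data can be found, so any data previously held by that shard is lost:

#curl -s http://localhost:9200/_cat/shards | grep UNASSIGNED | while read index shard prirep state rest; do
  curl -XPOST http://localhost:9200/_cluster/reroute -d '{
    "commands" : [ { "allocate" : {
      "index" : "'"$index"'", "shard" : '"$shard"', "node" : "node-3", "allow_primary" : true
    } } ]
  }'
done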

  I saw the rerouted shard become initialized and then started, so I ran the same command on the rest of the unassigned shards.
Running curl http://localhost:9200/_cluster/health?pretty confirmed that I was on the right track to fixing the cluster.

#curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "z0z0",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

  So the cluster was green again, but it was running on a single node. It was time to bring the other nodes back up one by one. When all the nodes were up, I set number_of_replicas back to 1 by running the following:

#curl -XPUT http://localhost:9200/_settings -d '
{
  "index" : {
    "number_of_replicas" : 1
  }
}'
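
  To verify that the replicas are allocated, the health endpoint can block until the cluster turns green (wait_for_status and timeout are standard parameters; the 60s value is an arbitrary choice):

#curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'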

  So my elasticsearch cluster is back to running on 3 nodes and is still in a green state. After a lot of googling and wasted time, I decided to write this article so that anyone who comes across this issue will have a working example of how to fix it.
