blueice posted on 2019-1-29 08:10:10

elasticsearch FORBIDDEN/12/index read-only / allow

  Resolving the shard synchronization errors that appear when upgrading Elasticsearch 6.0.0 from a single node to a multi-node cluster
  Original post · 2018-01-18 16:33:21
  After starting multiple ES nodes, ES begins electing a master node and synchronizing shard data to the new ES nodes. At this point the Logstash log throws the following error:
  logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by:
  This happens because the new ES node's data directory does not have enough free space, so receiving the synchronized data from the master node fails. To protect the data, the ES cluster automatically marks the index shards as read-only.
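  You can confirm the cause by checking free space on the new node's data directory. A minimal sketch (the path below is only an example; use the path.data value from your elasticsearch.yml):
df -h /var/lib/elasticsearch    # example data path, substitute your own path.data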
  Resolution steps:
  1. Provide enough storage space for data to be written. If you change the ES data directory in the configuration file, remember to restart ES.
  2. Remove the read-only block on the index. Run the following in Kibana's Dev Tools (or send the PUT request with curl from the server, as sketched after the request body below; the same applies to later requests):
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
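  As mentioned in step 2, the same change can be sent with curl from the server. A rough equivalent, assuming the node address used in the health check below:
curl -XPUT -H "Content-Type: application/json" http://10.0.7.220:9200/_settings -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'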
  Then check the ES cluster status: curl http://10.0.7.220:9200/_cluster/health?pretty
  Note that the value of "active_shards_percent_as_number" : 12.0 changes as the shards sync over;
- read only elasticsearch indices
  If your elasticsearch is responding with 403 and this message:
{
  "type": "cluster_block_exception",
  "reason": "blocked by: ;"
}
  Then you probably recovered from a full hard drive. Elasticsearch switches indices to read-only when it cannot index more documents because the disk is full; this keeps them available for read-only queries. Elasticsearch will not switch back automatically, but you can clear the block by sending:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
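  To confirm the block is actually gone you can re-read the settings; once the value has been set to null the key should no longer appear. A sketch under the same localhost assumption:
curl -s "http://localhost:9200/_all/_settings?pretty" | grep read_only_allow_delete || echo "no read-only block set"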
