轩辕阁 posted on 2019-1-29 06:24:18

elasticsearch write optimization

  When doing a full data dump, the following settings can be changed to improve write performance.


[*]Replica setting: disable replicas during the dump

PUT http://localhost:9200/test_index/_settings/
{
  "index": {
    "number_of_replicas": 0
  }
}

[*]Refresh setting: disable refresh

PUT http://localhost:9200/test_index/_settings/
{
  "index": {
    "refresh_interval": "-1"
  }
}

[*]translog size setting: raise the flush threshold (default 512MB)

PUT http://localhost:9200/test_index/_settings/
{
  "index.translog.flush_threshold_size": "1024mb"
}

[*]If running on SSDs, raise the segment-merge throttle (or disable it entirely)

PUT http://localhost:9200/_cluster/settings/
{
  "persistent": {
    "indices.store.throttle.max_bytes_per_sec": "200mb"
  }
}
PUT http://localhost:9200/_cluster/settings/
{
  "transient": {
    "indices.store.throttle.type": "none"
  }
}
  After the full dump completes, restore the settings:


[*]Replica setting: restore replicas

PUT http://localhost:9200/test_index/_settings/
{
  "index": {
    "number_of_replicas": 2
  }
}

[*]Refresh setting: restore the refresh interval

PUT http://localhost:9200/test_index/_settings/
{
  "index": {
    "refresh_interval": "1s"
  }
}

[*]translog size setting: restore the default

PUT http://localhost:9200/test_index/_settings/
{
  "index.translog.flush_threshold_size": "512mb"
}
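Like the dump-time settings, the restore steps can be sent as a single PUT to the index `_settings` endpoint. A sketch of the combined body, with values taken from the steps above (the replica count of 2 is the example's value; adjust to your cluster):

```python
import json

# Combined restore body for: PUT http://localhost:9200/test_index/_settings
RESTORE_INDEX_SETTINGS = {
    "index": {
        "number_of_replicas": 2,                    # back to normal replication
        "refresh_interval": "1s",                   # re-enable refresh
        "translog.flush_threshold_size": "512mb",   # back to the default
    }
}

# Serialized request body as it would be sent over HTTP.
body = json.dumps(RESTORE_INDEX_SETTINGS)
```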
  Naturally, use the bulk API for indexing, and keep each bulk request to elasticsearch under roughly 15MB.
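One way to keep bulk requests under the ~15MB mark is to build the NDJSON bulk bodies yourself and start a new batch whenever the next document would push the payload over the limit. A sketch (index name and document shape are illustrative; `json.dumps` emits ASCII by default, so character length equals byte length here):

```python
import json


def bulk_batches(docs, index="test_index", max_bytes=15 * 1024 * 1024):
    """Yield NDJSON bulk-request bodies, each at most max_bytes in size.

    Each document contributes two lines: an action line ({"index": ...})
    and the document source itself, per the bulk API's NDJSON format.
    """
    action = json.dumps({"index": {"_index": index}}) + "\n"
    batch, size = [], 0
    for doc in docs:
        line = action + json.dumps(doc) + "\n"
        # Cut a new batch if adding this doc would exceed the limit
        # (but never emit an empty batch, even for an oversized doc).
        if batch and size + len(line) > max_bytes:
            yield "".join(batch)
            batch, size = [], 0
        batch.append(line)
        size += len(line)
    if batch:
        yield "".join(batch)


# Example: three small docs easily fit into a single batch.
batches = list(bulk_batches([{"id": i} for i in range(3)]))
```

Each yielded string is one body for `POST /_bulk`; the per-batch size cap bounds the memory and network cost of every request regardless of how many documents the dump contains.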


