How to Temporarily Work Around the Elasticsearch Maximum Shard Count Limit?
- By : Will
- Category : Elastic Stack

1 Preface
One problem, one article, one story.
Today I ran into the Elasticsearch maximum shard count limit. The key error message is:
Validation Failed: 1: this action would add [2] shards, but this cluster currently has [5000]/[5000] maximum normal shards open;
The full log entry from Logstash is:
Apr 28 06:30:21 azlogstash01 logstash[1104]: [2025-04-28T06:30:21,755][WARN ][logstash.outputs.elasticsearch][main][dd88e8fe91317e1a1b36ff37e3e8b314904bb44ab2189d63c63255c646d00eb3] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"hk-network-2025.04.27", :routing=>nil}, {"tags"=>["_grokparsefailure_sysloginput", "hk-network", "hk-network"], "event"=>{"original"=>"Apr 28 2025 06:30:21: %ASA-5-746014: user-identity: [FQDN] smtp.office365.com address 40.97.9.9 obsolete\n"}, "log"=>{"syslog"=>{"priority"=>0, "facility"=>{"name"=>"kernel", "code"=>0}, "severity"=>{"name"=>"Emergency", "code"=>0}}}, "message"=>"Apr 28 2025 06:30:21: %ASA-5-746014: user-identity: [FQDN] smtp.office365.com address 40.97.9.9 obsolete\n", "@timestamp"=>2025-04-27T22:30:21.699268212Z, "service"=>{"type"=>"system"}, "host"=>{"ip"=>"10.168.0.37"}, "type"=>"514", "@version"=>"1"}], :response=>{"index"=>{"status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Validation Failed: 1: this action would add [2] shards, but this cluster currently has [5000]/[5000] maximum normal shards open;"}}}}
2 Best Practices
2.1 Query the Current Per-Node Shard Setting in Kibana
GET _cluster/settings?include_defaults=true
Then press "Ctrl+F" in the returned result and search for "max_shards_per_node". You should see the following setting:
# ...
"max_shards_per_node": "1000",
# ...
The cluster-wide cap in the error message is this per-node value multiplied by the number of non-frozen data nodes, which is how this cluster arrives at the [5000]/[5000] limit.
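If you prefer not to scan the full settings dump, the standard filter_path parameter and the _cat/nodes API give a narrower view of how close the cluster is to the cap. This is just a convenience sketch; adjust it to your own cluster:
# Current number of active shards cluster-wide (primaries plus replicas)
GET _cluster/health?filter_path=active_shards,status

# List nodes and their roles; non-frozen data nodes count toward the limit
GET _cat/nodes?v&h=name,node.role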
2.2 Modify the Per-Node Shard Setting in Kibana
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
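To confirm the change took effect, you can re-read the persistent settings (flat_settings is a standard query option that flattens the nested keys):
# Should now show "cluster.max_shards_per_node": "1500"
GET _cluster/settings?flat_settings=true&filter_path=persistent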
In addition, if you later need to restore the default value, run the following command:
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": null
  }
}
Note that the above is a temporary workaround that sacrifices performance:
– You may want to use index lifecycle management to shrink old indices down to a single shard each, as sketched after this list.
– Or scale out by adding more Elasticsearch nodes.
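For the lifecycle approach, an ILM policy with a shrink action can automatically reduce older indices to one shard. The sketch below is illustrative only: the policy name single-shard-after-7d and the 7d threshold are made-up values, and the policy still has to be attached to your index templates before it applies to new indices.
# Hypothetical policy; tune the name, age, and phase to your retention needs
PUT _ilm/policy/single-shard-after-7d
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": { "number_of_shards": 1 }
        }
      }
    }
  }
}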