After the ELK deployment was finished, a few problems came up that needed tuning and optimization.
1. Adjust the Elasticsearch heap size
Elasticsearch's default heap is 1 GB, which is too small for this workload, so half of the machine's memory should be given to the JVM.
Check system memory:
# free -m
             total       used       free     shared    buffers     cached
Mem:         24028      20449       3579          0        185       8151
-/+ buffers/cache:      12112      11916
Swap:            0          0          0
The machine has 24 GB, so the plan is to give 12 GB to the JVM.
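As a quick sketch, that "half of RAM" number can be computed straight from /proc/meminfo (the Elasticsearch docs also advise staying below roughly 32 GB so the JVM can keep using compressed object pointers, which is not a concern at 12 GB):

```shell
# Half of total physical memory in MB -- a starting point for the ES heap.
# MemTotal in /proc/meminfo is reported in kB.
half_mem_mb=$(awk '/^MemTotal:/ {print int($2/1024/2)}' /proc/meminfo)
echo "${half_mem_mb} MB"
```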
How to change it:
(1) Set the ES_HEAP_SIZE environment variable:
vim /etc/profile
export ES_HEAP_SIZE=12G
source /etc/profile
(2) Pass the heap size on the command line at startup:
./bin/elasticsearch -Xmx12G -Xms12G
Restart Elasticsearch and check whether the setting took effect:
/etc/init.d/elasticsearch restart
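One way to check is to ask the node itself for its JVM limits via the nodes info API (this assumes Elasticsearch is answering on localhost:9200):

```
# Max heap as seen by the running node; should report ~12 GB after the change.
curl -s 'http://localhost:9200/_nodes/_local/jvm?pretty' | grep heap_max
```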
2. Configuration file changes
vim elasticsearch.yml
bootstrap.mlockall: true
Set this to true to lock the process memory. Elasticsearch slows down sharply once the JVM starts swapping, so make sure it never swaps: set ES_MIN_MEM and ES_MAX_MEM to the same value, leave enough memory on the machine for Elasticsearch, and allow the elasticsearch process to lock memory.
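Besides the yml setting, the OS also has to permit the service user to lock memory. On most Linux systems that means limits.conf entries like the following (the user name elasticsearch here is an assumption; use whatever account actually runs the service):

```
# /etc/security/limits.conf -- allow the service user to lock memory
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
```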
3. Error: [FIELDDATA] Data too large, data for [proccessDate]
The documentation explains it as follows:
Fielddata size is only checked after the data has been loaded. What happens if the next query would load enough fielddata to push the cache past the available heap? Unfortunately, it triggers an OOM exception.
The circuit breaker is there to control cache loading: it estimates how much memory the current query would need and refuses to run it past the limit.
# curl -XGET http://localhost:9200/_cluster/settings?pretty
{
  "persistent" : {
    "indices" : {
      "breaker" : {
        "fielddata" : {
          "limit" : "40%"
        }
      },
      "store" : {
        "throttle" : {
          "max_bytes_per_sec" : "100mb"
        }
      }
    }
  },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}
Note: the fielddata circuit breaker caps how much fielddata can be loaded; the default limit is 60% of the heap. Here it has been set to 40%.
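The 40% limit shown above can be applied at runtime through the cluster settings API instead of a config file. A sketch, assuming Elasticsearch answers on localhost:9200 (adjust the percentage to your cluster):

```
# Requires a running cluster; persists the breaker limit across restarts.
curl -XPUT http://localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%"
  }
}'
```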
4. Change the Logstash template's refresh_interval
The default is 5 seconds, which refreshes more often than this workload needs, so the interval should be lengthened:
# curl -XGET http://localhost:9200/_template/logstash?pretty
{
  "logstash" : {
    "order" : 0,
    "template" : "logstash-*",
    "settings" : {
      "index" : {
        "refresh_interval" : "5s"
      }
    },
.....
Change it to 20 seconds:
cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch
vim elasticsearch-template.json
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "20s"
  },
....
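Before restarting, it is worth sanity-checking that the edited template is still valid JSON; a broken template will silently fail to apply. A minimal sketch (for the real file, run `python3 -m json.tool < elasticsearch-template.json` instead of the heredoc):

```shell
# Validate JSON syntax; prints "template OK" only if parsing succeeds.
cat <<'EOF' | python3 -m json.tool > /dev/null && echo "template OK"
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "20s"
  }
}
EOF
```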
Restart the service:
/etc/init.d/logstash restart

5. Set up a reverse proxy and authentication
Use nginx to reverse-proxy Kibana and add basic authentication.
# vim kibana.conf
server {
    listen 80;
    server_name ckl.kibana.com;
    error_log /data/log/kibana_error.log;
    proxy_headers_hash_max_size 5120;
    proxy_headers_hash_bucket_size 640;
    location / {
        proxy_pass http://127.0.0.1:5601;
        auth_basic "Restricted";
        auth_basic_user_file /usr/local/nginx/conf/ssl/site_pass;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# cat /usr/local/nginx/conf/ssl/site_pass
ckl:1TFEeTtLwlF7ZgMG
The user/password pair is one generated with htpasswd.
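The site_pass file is normally created with htpasswd (from apache2-utils / httpd-tools); if that tool is not installed, openssl can produce a compatible entry. The password below is a placeholder, not the real one:

```shell
# htpasswd-style entry for user "ckl" (placeholder password "changeme").
# Equivalent to: htpasswd -c /usr/local/nginx/conf/ssl/site_pass ckl
printf 'ckl:%s\n' "$(openssl passwd -apr1 'changeme')"
```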
What if someone who knows your port 5601 just accesses it directly? The simplest fix is a port redirect:
iptables -t nat -A PREROUTING -p tcp --dport 5601 -j REDIRECT --to-port 80
A better approach, though, is:
vim /usr/local/kibana/config/kibana.yml
server.host: "127.0.0.1"
With this set to 127.0.0.1, Kibana only listens on the loopback interface.
Then update the nginx configuration:
server {
    listen 80;
    server_name elk.ckl.com;
    location / {
        proxy_pass http://127.0.0.1:5601;
        auth_basic "Restricted";
        auth_basic_user_file /app/local/nginx/conf/ssl/site_pass;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
    location /bundles {
        proxy_pass http://127.0.0.1:5601/bundles;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
    location /plugins {
        proxy_pass http://127.0.0.1:5601/plugins;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
    location /app/ {
        proxy_pass http://127.0.0.1:5601/app/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Oddly, accessing the site directly by IP works, authentication included, but the domain name does not, even though the domain is already bound.
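One way to tell DNS trouble apart from an nginx server_name mismatch is to hit the server's IP directly while forcing the Host header. SERVER_IP and PASSWORD below are placeholders for the real address and the site_pass credentials; if this request succeeds while browsing to the domain fails, the domain is likely resolving to the wrong address:

```
# Does nginx serve this vhost when the Host header matches?
curl -I -u ckl:PASSWORD -H 'Host: elk.ckl.com' http://SERVER_IP/
# And what does the domain actually resolve to?
ping -c 1 elk.ckl.com
```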