I'm stuck in a situation where Logstash will not shut down. Details below.
Setup:
Elasticsearch and Logstash are hosted on the same server; Filebeat runs on another server. Filebeat sends data to Logstash, and Logstash forwards it to Elasticsearch. All of these are installed as system services (Ubuntu).
Elasticsearch version --> 7.6.2
Logstash version --> 7.7.0
I have loaded 2 pipelines into Logstash and verified that data is being sent to Elasticsearch. Now I need to update one of the pipelines, but I am not able to stop (restart) Logstash.
As far as I can tell, one of the pipelines has an issue and its data is not being forwarded to Elasticsearch. Whenever I try to shut down or stop Logstash, it hangs.
What I have tried to shut down/restart it:
systemctl stop/restart logstash
Output of the Logstash service status
systemctl status logstash
Output
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
Active: deactivating (stop-sigterm) since Tue 2020-09-15 15:18:17 UTC; 19h ago
Main PID: 14298 (java)
Tasks: 40 (limit: 19141)
CGroup: /system.slice/logstash.service
└─14298 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,238][INFO ][logstash.outputs.elasticsearch][pipeline1][8820ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,239][INFO ][logstash.outputs.elasticsearch][pipeline1][88a920ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,239][INFO ][logstash.outputs.elasticsearch][pipeline1][88a920ca4] retrying failed action with response code: 403 ({"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized
Sep 15 15:30:26 Server logstash[14298]: [2020-09-15T15:30:26,240][INFO ][logstash.outputs.elasticsearch][pipeline1][88531a20ca4] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>3}
Sep 15 15:30:30 Server logstash[14298]: [2020-09-15T15:30:30,894][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"} ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"] id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"} {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"...
Sep 15 15:30:35 Server logstash[14298]: [2020-09-15T15:30:35,911][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"} ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"] id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"} {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"...
Other methods tried:
pkill -9/-14 PID
but no luck.
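(Side note: if I understand pkill correctly, it matches process names rather than PIDs, so pkill -9 14298 would not actually signal the Logstash process. A minimal sketch of what I would try instead, using the Main PID 14298 from the status output above:)
# send SIGKILL straight to the PID
kill -9 14298
# or let systemd deliver the signal to the whole service cgroup
systemctl kill -s SIGKILL logstash.service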
I know there is some in-flight data preventing Logstash from shutting down. I have checked this document from Elastic, but it didn't help.
There is an unsafe shutdown option mentioned there (pipeline.unsafe_shutdown), which I haven't used.
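For future reference, a minimal sketch of how that option could be enabled (assuming the stock package path /etc/logstash/logstash.yml; unsafe shutdown can drop in-flight events, so this is only a sketch I have not tested):
# /etc/logstash/logstash.yml
# let Logstash exit even if in-flight events remain
pipeline.unsafe_shutdown: true
# the equivalent command-line flag is --pipeline.unsafe_shutdown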
Logstash logs:
[2020-09-15T15:30:40,928][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:45,945][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:50,963][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2020-09-15T15:30:55,980][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>43, "name"=>"[dem_logging_test]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:197:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["agent", "host"], "id"=>"df6bb998587313a3f737f399367d9ac0bbd9a962a64828c64ee0df7680f2f430"}]=>[{"thread_id"=>35, "name"=>"[dem_logging_test]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>37, "name"=>"[dem_logging_test]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>39, "name"=>"[dem_logging_test]>worker2", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>40, "name"=>"[dem_logging_test]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
I want to stop this Logstash service. The logs mention a dead URL, no live connections, and in-flight data, but right now I just want to shut it down.
Any ideas? And any ideas on how I can avoid this in the future?
Many thanks.
I have log files that are broken down into between 1 and 4 "Tasks". Each "Task" has fields for "WU name" and "estimated CPU time remaining". Ultimately, I want the bash script output to look like this 3-Task example:
Task 1 Mini_Protein_binds_COVID-19_boinc_ 0d:7h:44m:28s
Task 2 shapeshift_pair6_msd4X_4_f_e0_161_ 0d:4h:14m:22s
Task 3 rep730_0078_symC_reordered_0002_pr 1d:1h:38m:41s
So far: I can count the Tasks in the log, I can isolate the number of characters I want from the "WU name", I can convert the "estimated CPU time remaining" from seconds to days:hours:minutes:seconds, and I can output all of that in 'pretty' columns. The problem is that I can only process 1 Task, using:
#!/bin/bash
# Initialize counter
counter=1
# Count how many Tasks (iterations) are in the log
cnt_wu=$(grep -c "WU name:" /mnt/work/sec-conv/bnc-sample3.txt)
# Iterate the loop cnt_wu times
while [ "$counter" -le "$cnt_wu" ]
do
    core_cnt=$counter
    # First 34 characters of the WU name
    wu=$(grep -Po 'WU name: \K.*' /mnt/work/sec-conv/bnc-sample3.txt | cut -c1-34)
    # Estimated CPU time remaining, truncated to whole seconds
    sec=$(grep -Po 'estimated CPU time remaining: \K.*' /mnt/work/sec-conv/bnc-sample3.txt | cut -f1 -d".")
    # Convert seconds to days:hours:minutes:seconds
    dhms=$(printf '%dd:%dh:%dm:%ds\n' $((sec/86400)) $((sec%86400/3600)) $((sec%3600/60)) $((sec%60)))
    echo "Task ${core_cnt}" $'\t' "$wu" $'\t' "$dhms" | column -ts $'\t'
    counter=$((counter + 1))
done
Note: /mnt/work/sec-conv/bnc-sample3.txt is a static one-Task sample used only for developing this script.
What I can't figure out is the next step: processing an arbitrary number of Tasks. I can't figure out how to leverage the while/counter combination properly, or how to step through the occurrences of Tasks.
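One idea I have been toying with (just a sketch, not verified against a multi-Task log) is to pull only the counter-th match out of each grep instead of the whole list, e.g.:
# pick the Nth "WU name" / "estimated CPU time remaining" match, where N = $counter
wu=$(grep -Po 'WU name: \K.*' /mnt/work/sec-conv/bnc-sample3.txt | sed -n "${counter}p" | cut -c1-34)
sec=$(grep -Po 'estimated CPU time remaining: \K.*' /mnt/work/sec-conv/bnc-sample3.txt | sed -n "${counter}p" | cut -f1 -d".")
but I'm not sure that is the right way to tie it into the while/counter loop.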
Adding bnc-sample.txt (contains 3 Tasks)
1) -----------
name: Rosetta#home
master URL: https://boinc.bakerlab.org/rosetta/
user_name: XXXXXXX
team_name:
resource share: 100.000000
user_total_credit: 10266.993660
user_expavg_credit: 512.420495
host_total_credit: 10266.993660
host_expavg_credit: 512.603669
nrpc_failures: 0
master_fetch_failures: 0
master fetch pending: no
scheduler RPC pending: no
trickle upload pending: no
attached via Account Manager: no
ended: no
suspended via GUI: no
don't request more work: no
disk usage: 0.000000
last RPC: Wed Jun 10 15:55:29 2020
project files downloaded: 0.000000
GUI URL:
name: Message boards
description: Correspond with other users on the Rosetta#home message boards
URL: https://boinc.bakerlab.org/rosetta/forum_index.php
GUI URL:
name: Your account
description: View your account information
URL: https://boinc.bakerlab.org/rosetta/home.php
GUI URL:
name: Your tasks
description: View the last week or so of computational work
URL: https://boinc.bakerlab.org/rosetta/results.php?userid=XXXXXXX
jobs succeeded: 117
jobs failed: 0
elapsed time: 2892439.609931
cross-project ID: 3538b98e5f16a958a6bdd2XXXXXXXXX
======== Tasks ========
1) -----------
name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730_0
WU name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26882.771040
slot: 1
PID: 28434
CPU time at last checkpoint: 3925.896000
current CPU time: 4314.761000
fraction done: 0.066570
swap size: 431 MB
working set size: 310 MB
2) -----------
name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54_0
WU name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26412.937920
slot: 2
PID: 28804
CPU time at last checkpoint: 3829.626000
current CPU time: 3879.975000
fraction done: 0.082884
swap size: 628 MB
working set size: 513 MB
3) -----------
name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2_0
WU name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:47 2020
report deadline: Thu Jun 11 09:58:46 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 27868.559616
slot: 0
PID: 30988
CPU time at last checkpoint: 1265.356000
current CPU time: 1327.603000
fraction done: 0.032342
swap size: 792 MB
working set size: 668 MB
Again, I appreciate any guidance!
I'm trying to run "curl http://localhost:9200" but getting "Failed connection refused". Firewalld is off and the elasticsearch.yml settings are at their defaults. Below is a portion of the yml file.
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/log/elasticsearch
#
# Path to log files:
#
path.logs: /var/data/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
Below is a tail of the elasticsearch.log file:
[2018-03-29T07:06:02,094][INFO ][o.e.c.s.MasterService ] [TBin_UP] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}
[2018-03-29T07:06:02,105][INFO ][o.e.c.s.ClusterApplierService] [TBin_UP] new_master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300}, reason: apply cluster state (from master [master {TBin_UP}{TBin_UPRQ3mPvlpCkCeZcw}{-F76gFi0T2aqmf9MYJXt9A}{127.0.0.1}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-29T07:06:02,148][INFO ][o.e.g.GatewayService ] [TBin_UP] recovered [0] indices into cluster_state
[2018-03-29T07:06:02,155][INFO ][o.e.h.n.Netty4HttpServerTransport] [TBin_UP] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-03-29T07:06:02,155][INFO ][o.e.n.Node ] [TBin_UP] started
[2018-03-29T07:06:02,445][INFO ][o.e.m.j.JvmGcMonitorService] [TBin_UP] [gc][14] overhead, spent [300ms] collecting in the last [1s]
[2018-03-29T07:14:50,259][INFO ][o.e.n.Node ] [TBin_UP] stopping ...
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] stopped
[2018-03-29T07:14:50,598][INFO ][o.e.n.Node ] [TBin_UP] closing ...
[2018-03-29T07:14:50,620][INFO ][o.e.n.Node ] [TBin_UP] closed
Service status:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-29 08:05:46 EDT; 2min 38s ago
Docs: http://www.elastic.co
Process: 22384 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 22384 (code=exited, status=1/FAILURE)
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,668 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,669 main ERROR Null object returned for RollingFile in Appenders.
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,670 main ERROR Unable to locate appender "rolling" for logger config "root"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logger config "index.indexing.slowlog.index"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,671 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger config "index.search.slowlog"
Mar 29 08:05:36 satyr elasticsearch[22384]: 2018-03-29 08:05:36,672 main ERROR Unable to locate appender "deprecation_rolling" for logger config "org.elasticsearch.deprecation"
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 29 08:05:46 satyr systemd[1]: Unit elasticsearch.service entered failed state.
Mar 29 08:05:46 satyr systemd[1]: elasticsearch.service failed.
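For what it's worth, these are the sanity checks I was planning to run next (just a sketch; the unit name and paths are the ones from my setup above):
# is anything actually listening on 9200?
ss -tlnp | grep 9200
# full startup failure from the service journal
journalctl -u elasticsearch --no-pager -n 100
# the appender errors above make me suspect the log path, so check that
# the path.logs directory exists and is writable by the elasticsearch user
sudo -u elasticsearch test -w /var/data/elasticsearch && echo writable || echo not-writable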
On a RHEL6 system, I followed the steps laid out here to create a repository and capture a snapshot prior to my upgrade. I verified the existence of the snapshot:
curl 'localhost:9200/_snapshot/_all?pretty=true'
Which gave me the following result:
{ "upgrade_backup" : {
"type" : "fs",
"settings" : {
"compress" : "true",
"location" : "/tmp/elasticsearch-backup"
} } }
After upgrading Elasticsearch via yum, I went to restore my snapshot but none are showing up:
curl 'localhost:9200/_snapshot/_all?pretty=true'
{ }
I checked on the file system and see the repository files:
ls -lrt /tmp/elasticsearch-backup
total 24
-rw-r--r--. 1 elasticsearch elasticsearch 121 Apr 7 14:42 meta-snapshot-number-one.dat
drwxr-xr-x. 3 elasticsearch elasticsearch 21 Apr 7 14:42 indices
-rw-r--r--. 1 elasticsearch elasticsearch 191 Apr 7 14:42 snap-snapshot-number-one.dat
-rw-r--r--. 1 elasticsearch elasticsearch 37 Apr 7 14:42 index
-rw-r--r--. 1 elasticsearch elasticsearch 188 Apr 7 14:51 index-0
-rw-r--r--. 1 elasticsearch elasticsearch 8 Apr 7 14:51 index.latest
-rw-r--r--. 1 elasticsearch elasticsearch 29 Apr 7 14:51 incompatible-snapshots
I made sure elasticsearch.yml still has the "data.repo" tag, so I'm not sure where to look or what to do to determine what happened, but somehow my snapshots vanished!
You need to add the following line to elasticsearch.yml:
path.repo: ["/tmp/elasticsearch-backup"]
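For the restart step mentioned below, a sketch of the usual commands (the question mentions RHEL 6, which uses the SysV form):
service elasticsearch restart
# or, on a systemd-based host:
systemctl restart elasticsearch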
Then restart the Elasticsearch service and create a new snapshot repository:
curl -XPUT "http://localhost:92000/_snapshot/backup" -H 'Content-Type: application/json' -d '{
"type": "fs",
"settings": {
"location": "/tmp/elasticsearch-backup",
"compress": true
}
}'
Now you should be able to list all the snapshots in your repository and restore them if needed:
curl -s -XGET "localhost:9200/_snapshot/backup/_all" | jq .
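If the old snapshot shows up there (the file listing in the question suggests it was named snapshot-number-one), a restore would look roughly like this; adjust the snapshot name to whatever the listing actually returns:
curl -XPOST "localhost:9200/_snapshot/backup/snapshot-number-one/_restore?pretty"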
I'm trying to use logging.yml (the Elasticsearch logging config) together with a logrotate configuration for Elasticsearch log rotation.
Information:
1. Elasticsearch version: 1.7.4
I don't want to keep any rotated files.
Configuration:
logging.yml configuration:
file:
  type: org.apache.log4j.rolling.RollingFileAppender
  file: ${path.logs}/${cluster.name}.log
  rollingPolicy: org.apache.log4j.rolling.TimeBasedRollingPolicy
  rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
Logrotate configuration:
/var/log/elasticsearch/*.log {
    daily
    rotate 0
    copytruncate
    compress
    delaycompress
    missingok
    notifempty
    maxage 0
    create 644 elasticsearch elasticsearch
}
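Before forcing the rotation I also dry-run the config to see what logrotate intends to do; a minimal sketch, using the same config file path as the forced run below:
# debug mode: print the planned actions without touching any files
logrotate -dv /etc/logrotate.d/elasticsearch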
More details:
Running ls on /var/log/elasticsearch:
total 20K
-rw-r--r-- 1 elasticsearch elasticsearch 18763 Jul 4 08:46 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
Running logrotate manually:
logrotate -fv /etc/logrotate.d/elasticsearch
logrotate output:
reading config file /etc/logrotate.d/elasticsearch
reading config info for /var/log/elasticsearch/*.log
Handling 1 logs
rotating pattern: /var/log/elasticsearch/*.log forced from command line (no old logs will be kept)
empty log files are not rotated, old logs are removed
considering log /var/log/elasticsearch/dba01es.d1.log
log needs rotating
considering log /var/log/elasticsearch/dba01es.d1_index_indexing_slowlog.log
log does not need rotating
considering log /var/log/elasticsearch/dba01es.d1_index_search_slowlog.log
log does not need rotating
rotating log /var/log/elasticsearch/dba01es.d1.log, log->rotateCount is 0
dateext suffix '-20160704'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/elasticsearch/dba01es.d1.log.1 does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.1.gz to /var/log/elasticsearch/dba01es.d1.log.2.gz (rotatecount 1, logstart 1, i 1),
old log /var/log/elasticsearch/dba01es.d1.log.1.gz does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.0.gz to /var/log/elasticsearch/dba01es.d1.log.1.gz (rotatecount 1, logstart 1, i 0),
old log /var/log/elasticsearch/dba01es.d1.log.0.gz does not exist
log /var/log/elasticsearch/dba01es.d1.log.2.gz doesn't exist -- won't try to dispose of it
copying /var/log/elasticsearch/dba01es.d1.log to /var/log/elasticsearch/dba01es.d1.log.1
truncating /var/log/elasticsearch/dba01es.d1.log
Running ll after running logrotate manually:
total 32K
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jul 4 08:48 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 28937 Jul 4 08:48 dba01es.d1.log.1
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
My questions are:
Why is the dba01es.d1.log.1 file not compressed?
Why is rotate 0 not working here? logrotate keeps saving the rotated file.
Thanks a lot!
Amit