Elasticsearch 1.7.4 log rotation

I'm trying to use logging.yml (the Elasticsearch file) together with a logrotate configuration for Elasticsearch log rotation.
Information:
1. Elasticsearch version: 1.7.4
2. I don't want to keep any rotated files.
Configuration:
logging.yml configuration:
file:
  type: org.apache.log4j.rolling.RollingFileAppender
  file: ${path.logs}/${cluster.name}.log
  rollingPolicy: org.apache.log4j.rolling.TimeBasedRollingPolicy
  rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
Logrotate configuration:
/var/log/elasticsearch/*.log {
    daily
    rotate 0
    copytruncate
    compress
    delaycompress
    missingok
    notifempty
    maxage 0
    create 644 elasticsearch elasticsearch
}
More details:
Running ls on /var/log/elasticsearch:
total 20K
-rw-r--r-- 1 elasticsearch elasticsearch 18763 Jul 4 08:46 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
Running logrotate manually:
logrotate -fv /etc/logrotate.d/elasticsearch
logrotate output:
reading config file /etc/logrotate.d/elasticsearch
reading config info for /var/log/elasticsearch/*.log
Handling 1 logs
rotating pattern: /var/log/elasticsearch/*.log forced from command line (no old logs will be kept)
empty log files are not rotated, old logs are removed
considering log /var/log/elasticsearch/dba01es.d1.log
log needs rotating
considering log /var/log/elasticsearch/dba01es.d1_index_indexing_slowlog.log
log does not need rotating
considering log /var/log/elasticsearch/dba01es.d1_index_search_slowlog.log
log does not need rotating
rotating log /var/log/elasticsearch/dba01es.d1.log, log->rotateCount is 0
dateext suffix '-20160704'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/elasticsearch/dba01es.d1.log.1 does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.1.gz to /var/log/elasticsearch/dba01es.d1.log.2.gz (rotatecount 1, logstart 1, i 1),
old log /var/log/elasticsearch/dba01es.d1.log.1.gz does not exist
renaming /var/log/elasticsearch/dba01es.d1.log.0.gz to /var/log/elasticsearch/dba01es.d1.log.1.gz (rotatecount 1, logstart 1, i 0),
old log /var/log/elasticsearch/dba01es.d1.log.0.gz does not exist
log /var/log/elasticsearch/dba01es.d1.log.2.gz doesn't exist -- won't try to dispose of it
copying /var/log/elasticsearch/dba01es.d1.log to /var/log/elasticsearch/dba01es.d1.log.1
truncating /var/log/elasticsearch/dba01es.d1.log
Running ll after running logrotate manually:
total 32K
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jul 4 08:48 dba01es.d1.log
-rw-r--r-- 1 elasticsearch elasticsearch 28937 Jul 4 08:48 dba01es.d1.log.1
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Jun 19 10:01 dba01es.d1_index_search_slowlog.log
My questions are:
1. Why is the dba01es.d1.log.1 file not compressed?
2. Why is rotate 0 not working here? logrotate keeps the rotated file...
Thanks a lot!
Amit
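A likely explanation for both observations (offered as a hedged note, not a verified fix): delaycompress tells logrotate to postpone compression of the newest rotated file until the next rotation cycle, so dba01es.d1.log.1 is deliberately left uncompressed, and at least one rotated copy survives even with rotate 0. If no uncompressed copies should ever be kept, a configuration without delaycompress might look like this sketch:
/var/log/elasticsearch/*.log {
    daily
    rotate 0
    copytruncate
    compress
    missingok
    notifempty
    create 644 elasticsearch elasticsearch
}
Note that the log4j TimeBasedRollingPolicy above is also configured to roll and gzip the same file daily, so the two mechanisms would overlap if both are active.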

Related

How can Cloud-init be stopped on the first error?

When I start a Linux server with Cloud-init, I have a few scripts in /etc/cloud/cloud.cfg.d/ and they run in reverse alphabetical order:
# ll /etc/cloud/cloud.cfg.d/
total 28
-rw-r--r-- 1 root root 173 Dec 10 12:38 00-cloudinit-lifecycle-hook.cfg
-rw-r--r-- 1 root root 2120 Jun 1 2021 05_logging.cfg
-rw-r--r-- 1 root root 590 Oct 26 17:55 10_aws_yumvars.cfg
-rw-r--r-- 1 root root 29 Dec 1 18:22 20_amazonlinux_repo_https.cfg
-rw-r--r-- 1 root root 586 Dec 10 12:38 50-cloudinit-tomcat.cfg
-rw-r--r-- 1 root root 585 Dec 10 12:40 60-cloudinit-newrelic.cfg
The last to execute is 00-cloudinit-lifecycle-hook.cfg, in which I complete the lifecycle hook for the Auto Scaling Group with a CONTINUE signal. The ASG fails if it doesn't receive this signal within a given timeout.
The issue is that even if there's an error in 50-cloudinit-tomcat.cfg, it still runs 00-cloudinit-lifecycle-hook.cfg instead of stopping.
How can I ensure cloud-init stops and never reaches the last script? I would like the ASG to never receive the CONTINUE signal if there's any error.
Here are the files:
EC2 instance user-data:
#cloud-config
bootcmd:
  - [cloud-init-per, once, "app-volume", mkfs, -t, "ext4", "/dev/nvme1n1"]
mounts:
  - ["/dev/nvme1n1", "/app-volume", "ext4", "defaults,nofail", "0", "0"]
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
50-cloudinit-tomcat.cfg
#cloud-config
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - "#!/bin/bash -e"
  - set +x
  - echo ' '
  - echo '# ===================================='
  - echo '# Tomcat Cloud Init '
  - echo '# /etc/cloud/cloud.cfg.d/'
  - echo '# ===================================='
  - echo ' '
  - echo '#===================================='
  - echo '# Run Ansible'
  - echo '#===================================='
  - echo ' '
  - set -x
  - ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml
When I run ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml directly on the instance I get an error, and I know it returns exit code 2:
ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml #shows errors
echo $? # shows "2"
00-cloudinit-lifecycle-hook.cfg
#cloud-config
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - "/opt/lifecycles/lifecycle-hook-continue.sh"
An alternative I can think of is to send an ABANDON signal instead of CONTINUE as soon as there's an error in one of the cloud-init configs, but I can't find anything in the documentation on how to detect whether there was an error (a sketch of that idea follows below).
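One way to implement that alternative, sketched under a few assumptions: the hook name, ASG name and region below are made-up placeholders, and the idea is to stop relying on separate runcmd entries (the "#!/bin/bash -e" item is just another entry in the merged runcmd list, so it most likely does not make later entries stop on error) and let a single wrapper script decide which lifecycle result to send based on the playbook's exit code:
#!/bin/bash
# Hypothetical wrapper, e.g. /opt/lifecycles/run-config-and-signal.sh,
# called once from runcmd instead of the two separate .cfg runcmd lists.
set -u

# Instance ID from the EC2 metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Run the configuration step; choose the lifecycle result from its exit code
if ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml; then
  RESULT=CONTINUE
else
  RESULT=ABANDON
fi

# Placeholder hook/ASG names and region -- replace with the real ones
aws autoscaling complete-lifecycle-action \
  --region eu-west-1 \
  --lifecycle-hook-name tomcat-launch-hook \
  --auto-scaling-group-name tomcat-asg \
  --instance-id "$INSTANCE_ID" \
  --lifecycle-action-result "$RESULT"
With this shape, the ASG only ever gets CONTINUE when the playbook exits 0; any failure (such as the exit code 2 above) results in ABANDON.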

Unable to back up Elasticsearch data?

Hi, I have an Elasticsearch instance (version 6.6.0) running on a machine. It has some indices.
curl -X GET "10.10.9.1:9200/_cat/indices/mep-reports*?v&s=index&pretty"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open mep-reports-2019.09.11 l6iFm9fSTp6Q07Qa8BsB-w 1 1 149002 1065 13.6mb 13.6mb
yellow open mep-reports-2019.09.13 lX3twLgnThKUcOoF3B1vbw 1 1 80079 3870 10.1mb 10.1mb
yellow open mep-reports-2019.09.18 NzHFBXIASIifRpmlrWQmmg 1 1 283066 164 25.9mb 25.9mb
yellow open mep-reports-2019.09.20 UB3uCEouSAOAsy96AVz__g 1 1 22002 2 1.8mb 1.8mb
yellow open mep-reports-2019.09.23 VXI7K7SFS-Ol_FoHinuY3A 1 1 269836 2632 19.8mb 19.8mb
yellow open mep-reports-2019.09.25 yd6PUSA2Snug-1BAUZICzw 1 1 200001 1972 13.5mb 13.5mb
yellow open mep-reports-2019.10.01 ji0BqsTQRmm-rIKCd2pg_Q 1 1 5000 790 467kb 467kb
yellow open mep-reports-2019.10.10 rt3kb2VFTH6XLiqrIvpEow 1 1 5000 790 450.6kb 450.6kb
yellow open mep-reports-2019.10.17 ws3zILaySwu69U16dKSQlw 1 1 27 9 24.4kb 24.4kb
yellow open mep-reports-2019.10.24 iKc8ruqWTBCsYz83k6NpHg 1 1 2500 540 276.8kb 276.8kb
close mep-reports-2019.10.30 Qrq98yUeS_yvCwzDoQHb3A
yellow open mep-reports-2020.02.10 upBGvHxnTdaxHP52N8fEPg 1 1 56000 3260 5.3mb 5.3mb
yellow open mep-reports-2020.02.11 GfTOrlHBSJKKToHh3u4jnQ 1 1 500 0 43.4kb 43.4kb
I would like to take a backup of the data and load it into my local Elasticsearch instance. For that I have tried the following:
curl -X PUT "10.10.9.1:9200/_snapshot/my_backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/tmp/es-backup"
  }
}'
It then returns:
{
  "acknowledged" : true
}
However, when I try to list the backup folder it is empty:
ls -ltra /tmp/es-backup
total 4
drwxrwxrwt 1 root root 4096 Feb 12 11:11 ..
drwxrwxr-x 2 omn omn 6 Feb 12 11:46 .
I'd really appreciate any help.
Thank you.
Try using elasticdump to transfer data from one Elasticsearch instance to another.
You can also dump your data into a .json file.
A working example:
Export your indices (data) from the remote Elasticsearch server to a .json file:
elasticdump --input=http://10.10.9.1:9200 --output=data.json --type=data
Import data.json (located on your local machine) into your local Elasticsearch server:
elasticdump --input=data.json --output=http://localhost:9200 --type=data
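Note that --type=data copies only documents, so any index that doesn't already exist on the target gets dynamic mappings. If the original mappings matter, elasticdump can copy them per index with --type=mapping first; a sketch, using mep-reports-2019.09.11 as an example index:
elasticdump --input=http://10.10.9.1:9200/mep-reports-2019.09.11 --output=mapping.json --type=mapping
elasticdump --input=mapping.json --output=http://localhost:9200/mep-reports-2019.09.11 --type=mapping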
Try it.
Hope this helps.
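If the snapshot API route is preferred instead, two things are worth checking (a hedged sketch of the standard Elasticsearch 6.x snapshot workflow, not a confirmed diagnosis of the empty folder): the location must be a directory on the Elasticsearch server itself (10.10.9.1) and listed in path.repo on that node, and registering a repository writes nothing by itself; a snapshot has to be created explicitly:
# elasticsearch.yml on the 10.10.9.1 node (restart required)
path.repo: ["/tmp/es-backup"]

# create an actual snapshot in the registered repository
curl -X PUT "10.10.9.1:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty"
Only after a snapshot has been created should files appear under /tmp/es-backup on that server.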

Elasticsearch not creating the index for a new pipeline via Logstash

I have set up an ELK stack, but Elasticsearch is not creating the index and I am unable to upload the data. The Elasticsearch and Logstash services are both running.
The details are below. However, I do not see anything in the logs.
Elastic config:
[root@aruba-elk2 rm_logs]# cat /etc/elasticsearch/elasticsearch.yml
# Elasticsearch config
#########################
cluster.name: log-cohort-test
node.name: aruba-elk2
node.master: true
path:
  data: /elk/lib/elasticsearch
  logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
bootstrap.system_call_filter: False
[root@aruba-elk2 rm_logs]#
[root@aruba-elk2 rm_logs]#
Logstash config:
[root@aruba-elk2 rm_logs]# cat /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash
[root@aruba-elk2 rm_logs]# cat /etc/logstash/conf.d/logstash-syslog.conf
input {
  file {
    path => [ "/elk/rm_logs/*.txt" ]
    type => "rmlog"
  }
}
filter {
  if [type] == "rmlog" {
    grok {
      match => { "message" => "%{HOSTNAME:hostname},%{DATE:date},%{HOUR:hour1}:%{MINUTE:minute1},%{NUMBER}-%{WORD},%{USER:user},%{USER:user2} %{NUMBER:pid} %{NUMBER:float} %{NUMBER:float} %{NUMBER:number1} %{NUMBER:number2} %{DATA} %{HOUR:hour2}:%{MINUTE:minute2} %{HOUR:hour3}:%{MINUTE:minute3} %{GREEDYDATA:command},%{PATH:path}" }
      add_field => [ "received_at", "%{@timestamp}" ]
    }
  }
}
output {
  if [type] == "rmlog" {
    elasticsearch {
      hosts => ["aruba-elk2:9200"]
      manage_template => false
      index => "rmlog-%{+YYYY.MM.dd}"
      #document_type => "messages"
    }
  }
}
Input data Source:
[root@aruba-elk2 rm_logs]# cd /elk/rm_logs/
[root@aruba-elk2 rm_logs]# ls -ltrh | head
total 2.6M
-rw-r--r-- 1 root root 558 Jan 11 11:27 dbxchw092.txt
-rw-r--r-- 1 root root 405 Jan 11 11:27 dbxtx220.txt
-rw-r--r-- 1 root root 241 Jan 11 11:27 dbxcvm139.txt
-rw-r--r-- 1 root root 455 Jan 11 11:27 dbxcnl038.txt
-rw-r--r-- 1 root root 230 Jan 11 11:27 dbxchw052.txt
-rw-r--r-- 1 root root 143 Jan 11 11:27 dbxtx222.txt
-rw-r--r-- 1 root root 577 Jan 11 11:27 dbxtx224.txt
-rw-r--r-- 1 root root 274 Jan 11 11:27 dbxcvm082.txt
-rw-r--r-- 1 root root 281 Jan 11 11:27 dbxcsb003.txt
Sample of above data file:
testhost-in2,19/01/11,06:34,04-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /test/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-
testhost-in2,19/01/11,06:40,09-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:00 rm -rf /dv/ehf/users/arnav-090119-184844,/dv/ehf/users/arnav-090119-\
testhost-in2,19/01/11,06:45,14-mins,arnav,arnav 2427 0.1 0.0 58980 580 ? S 06:30 0:01 rm -rf /
LOGS:
Logstash logs:
[root@aruba-elk2 logstash]# cat logstash-plain.log
[2019-01-12T23:48:31,653][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-12T23:48:34,959][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-12T23:48:35,374][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://aruba-elk2:9200/]}}
[2019-01-12T23:48:35,588][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://aruba-elk2:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://aruba-elk2:9200/][Manticore::SocketException] Connection refused"}
[2019-01-12T23:48:35,608][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//aruba-elk2:9200"]}
[2019-01-12T23:48:36,063][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_076330d5fd2c2b811bc1960a3d0547be", :path=>["/elk/rm_logs/*.txt"]}
[2019-01-12T23:48:36,095][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x424bb675 run>"}
[2019-01-12T23:48:36,155][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-12T23:48:36,156][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-12T23:48:36,542][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-12T23:48:40,796][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://aruba-elk2:9200/"}
[2019-01-12T23:48:40,855][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-01-12T23:48:40,859][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Elasticsearch LOGS:
[root@aruba-elk2 elasticsearch]# cat gc.log.0.current | tail
2019-01-13T00:13:29.280+0530: 1237.781: Total time for which application threads were stopped: 0.0002681 seconds, Stopping threads took: 0.0000316 seconds
2019-01-13T00:13:31.281+0530: 1239.782: Total time for which application threads were stopped: 0.0003670 seconds, Stopping threads took: 0.0000586 seconds
2019-01-13T00:13:32.281+0530: 1240.782: Total time for which application threads were stopped: 0.0003134 seconds, Stopping threads took: 0.0000708 seconds
2019-01-13T00:13:37.282+0530: 1245.783: Total time for which application threads were stopped: 0.0004663 seconds, Stopping threads took: 0.0001315 seconds
2019-01-13T00:13:51.284+0530: 1259.785: Total time for which application threads were stopped: 0.0004230 seconds, Stopping threads took: 0.0000691 seconds
2019-01-13T00:13:57.286+0530: 1265.787: Total time for which application threads were stopped: 0.0008421 seconds, Stopping threads took: 0.0002697 seconds
2019-01-13T00:13:58.287+0530: 1266.787: Total time for which application threads were stopped: 0.0004467 seconds, Stopping threads took: 0.0000706 seconds
2019-01-13T00:14:11.288+0530: 1279.789: Total time for which application threads were stopped: 0.0004702 seconds, Stopping threads took: 0.0001105 seconds
2019-01-13T00:14:18.289+0530: 1286.790: Total time for which application threads were stopped: 0.0004123 seconds, Stopping threads took: 0.0000750 seconds
Any help will be appreciated.
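The Logstash log above shows the pipeline starting and the connection to Elasticsearch being restored, so one thing worth checking (a hedged suggestion, not a confirmed diagnosis) is whether the file input ever emits an event: by default it tails files from the end and remembers positions in its sincedb, so pre-existing files that never grow produce nothing and no rmlog-* index is created. For a one-off test, forcing a read from the beginning and disabling the sincedb might look like this:
input {
  file {
    path => [ "/elk/rm_logs/*.txt" ]
    type => "rmlog"
    start_position => "beginning"   # read existing files from the start
    sincedb_path => "/dev/null"     # forget positions between runs (testing only)
  }
}
After restarting Logstash, the index list can be checked with:
curl 'http://aruba-elk2:9200/_cat/indices/rmlog-*?v'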

Elasticsearch repository shows no snapshots after upgrade from 2.x to 5.x

On a RHEL6 system, I followed the steps laid out here to create a repository and capture a snapshot prior to my upgrade. I verified the existence of the snapshot:
curl 'localhost:9200/_snapshot/_all?pretty=true'
Which gave me the following result:
{ "upgrade_backup" : {
"type" : "fs",
"settings" : {
"compress" : "true",
"location" : "/tmp/elasticsearch-backup"
} } }
After upgrading Elasticsearch via yum, I went to restore my snapshot but none are showing up:
curl 'localhost:9200/_snapshot/_all?pretty=true'
{ }
I checked on the file system and see the repository files:
ls -lrt /tmp/elasticsearch-backup
total 24
-rw-r--r--. 1 elasticsearch elasticsearch 121 Apr 7 14:42 meta-snapshot-number-one.dat
drwxr-xr-x. 3 elasticsearch elasticsearch 21 Apr 7 14:42 indices
-rw-r--r--. 1 elasticsearch elasticsearch 191 Apr 7 14:42 snap-snapshot-number-one.dat
-rw-r--r--. 1 elasticsearch elasticsearch 37 Apr 7 14:42 index
-rw-r--r--. 1 elasticsearch elasticsearch 188 Apr 7 14:51 index-0
-rw-r--r--. 1 elasticsearch elasticsearch 8 Apr 7 14:51 index.latest
-rw-r--r--. 1 elasticsearch elasticsearch 29 Apr 7 14:51 incompatible-snapshots
I made sure elasticsearch.yml still has the "data.repo" tag, so I'm not sure where to look or what to do to determine what happened, but somehow my snapshots vanished!
You need to add the following line to elasticsearch.yml:
path.repo: ["/tmp/elasticsearch-backup"]
Then restart the Elasticsearch service and create a new snapshot repository:
curl -XPUT "http://localhost:9200/_snapshot/backup" -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/tmp/elasticsearch-backup",
    "compress": true
  }
}'
Now you should be able to list all snapshots in your repository and eventually restore them:
curl -s -XGET "localhost:9200/_snapshot/backup/_all" | jq .
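Once the old snapshots are listed again, restoring one should just be the standard restore API; a sketch, assuming the snapshot is named snapshot-number-one as suggested by the snap-snapshot-number-one.dat file above (close or delete any clashing indices before restoring):
curl -XPOST "localhost:9200/_snapshot/backup/snapshot-number-one/_restore?pretty"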

Why does `Dir[directory_path].empty?` return `false` all the time?

Dir[directory_path].empty? returns false all the time. The behavior is the same whether or not I run irb as root:
$ ll
total 12
drwxrwxrwx 2 ndefontenay ndefontenay 4096 Aug 12 12:11 ./
drwxrwxrwx 4 ndefontenay ndefontenay 4096 Aug 5 11:45 ../
-rw-rw-r-- 1 ndefontenay ndefontenay 8 Aug 12 12:11 test
$ irb
> Dir["/opt/purge_entitlement/in"].empty?
=> false
> exit
$ sudo irb
> Dir["/opt/purge_entitlement/in"].empty?
=> false
If someone could shed some light on this problem, it would be pretty helpful.
Dir[].empty? returns false all the time
It should: the glob pattern "/opt/purge_entitlement/in" matches the directory itself, so the result is the one-element array ["/opt/purge_entitlement/in"], which is never empty.
This is not an answer to your question, but to check the directory's contents rather than the directory path itself, glob for the entries inside it. You will probably get true for this:
Dir.glob("/opt/purge_entitlement/in/*").empty?
