Stop Logstash agent on inactivity? - elasticsearch

The central log server I'm working on uses two Logstash agents, each running in its own screen:
a shipper to collect logs from front servers
an indexer to send the logs into Elasticsearch
Sometimes it can be useful to re-import some logs (after a failure, to re-format the logs, etc.). For this purpose, I run a third agent called importer whose job is to re-import old logs.
The problem I'm facing is that I have to monitor the re-import process until it's completely done, at which point the agent becomes killable.
So I would like to know if there's some kind of option to stop an agent when it becomes idle.

You might be able to do something with the exec input (http://logstash.net/docs/1.4.2/inputs/exec). I'm thinking of something like cat /some/reload/file; sleep 30; kill $LS_PID, though I'm not quite sure how you'd get $LS_PID assigned. One way around that is sketched below.
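A way to sidestep the $LS_PID problem is a small wrapper script: start the importer in the background, remember $! as its PID, and kill it once its output file stops growing. This is only a sketch; importer.conf, the output path, and the 30-second idle threshold are assumptions, not Logstash defaults.

    #!/bin/sh
    # Hypothetical wrapper: run the importer and stop it once it goes idle.
    bin/logstash agent -f importer.conf &    # importer.conf is assumed
    LS_PID=$!

    LOG=/var/log/imported.log                # file the importer writes to (assumed)
    while kill -0 "$LS_PID" 2>/dev/null; do
        size1=$(wc -c < "$LOG")
        sleep 30
        size2=$(wc -c < "$LOG")
        # No growth for 30 seconds: assume the re-import is done.
        if [ "$size1" -eq "$size2" ]; then
            kill "$LS_PID"
        fi
    done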

Related

What is the correct use case for SchedulerLock lockAtMostFor?

I am using SchedulerLock in Spring Boot, and I am running 2 servers.
What I'm curious about is why the "lockAtMostFor" option exists at all.
Take an example: on one of my 2 servers, the scheduled task runs first and takes the lock.
But something goes wrong while it is running, and that server goes down.
At this moment, my scheduled task ends incompletely.
Every guide I read is full of vague answers about "lock time in case a node dies".
When a node dies, it can no longer execute schedules.
But why keep holding a LOCK for a dead node?
Even if I urgently try to execute the schedule manually on the 2nd server, the lock above makes that impossible.
What does this option exist for?
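For reference, a minimal sketch of how lockAtMostFor is typically declared with ShedLock; the task name, cron expression, and durations are made-up example values:

    import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class ReportJob {

        // lockAtMostFor is an upper bound on how long the lock may be held.
        // If the node executing this task dies mid-run, the lock expires after
        // 10 minutes, so another node can take over instead of waiting forever
        // on a lock that the dead node can never release.
        @Scheduled(cron = "0 0 * * * *")  // hourly (example value)
        @SchedulerLock(name = "reportJob", lockAtMostFor = "10m", lockAtLeastFor = "1m")
        public void generateReport() {
            // ... the actual work ...
        }
    }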

GCP - creating a VM instance and extracting logs

I have a Java application in which I am using GCP to create VM instances from images.
In this application, I would like to allow the user to view the VM creation logs, in order to stay updated on the status of the creation and to see failure points in detail.
I am sure such logs exist in GCP, but I have been unable to find specific APIs that let me see a specific action, for example the creation of instance "X".
Thanks for the help
When you create a VM, the response that you get is an operation ID (the creation takes time, so the Compute Engine API answers immediately with a long-running operation). To know the status of the VM creation (and start), you have to poll this operation regularly.
In the logs, you can also filter on this operation ID to select and view only the logs that you want on the Compute API side (create/start errors).
If you want to see the logs of the VM itself, filter the logs not by the operation ID but by the name of the VM and its zone.
In Java, there are client libraries that help you achieve this.
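As an illustration, a minimal sketch using the google-cloud-compute client library; the project, zone, machine type, and image below are placeholder values, and insertAsync returns a long-running operation whose polling get() handles for you:

    import com.google.api.gax.longrunning.OperationFuture;
    import com.google.cloud.compute.v1.AttachedDisk;
    import com.google.cloud.compute.v1.AttachedDiskInitializeParams;
    import com.google.cloud.compute.v1.Instance;
    import com.google.cloud.compute.v1.InstancesClient;
    import com.google.cloud.compute.v1.NetworkInterface;
    import com.google.cloud.compute.v1.Operation;
    import java.util.concurrent.TimeUnit;

    public class CreateInstance {
        public static void main(String[] args) throws Exception {
            String project = "my-project";      // placeholder
            String zone = "europe-west1-b";     // placeholder

            Instance instance = Instance.newBuilder()
                .setName("instance-x")
                .setMachineType(String.format("zones/%s/machineTypes/e2-small", zone))
                .addDisks(AttachedDisk.newBuilder()
                    .setBoot(true)
                    .setInitializeParams(AttachedDiskInitializeParams.newBuilder()
                        // Placeholder image; use your own image here.
                        .setSourceImage("projects/debian-cloud/global/images/family/debian-12")
                        .build())
                    .build())
                .addNetworkInterfaces(NetworkInterface.newBuilder()
                    .setName("global/networks/default")
                    .build())
                .build();

            try (InstancesClient client = InstancesClient.create()) {
                // The insert call is asynchronous; get() polls the operation
                // until it completes or the timeout expires.
                OperationFuture<Operation, Operation> future =
                    client.insertAsync(project, zone, instance);
                Operation op = future.get(3, TimeUnit.MINUTES);
                if (op.hasError()) {
                    System.out.println("Creation failed: " + op.getError());
                } else {
                    System.out.println("Instance created, operation " + op.getName());
                }
            }
        }
    }

The operation name printed at the end is the value you can filter on in Cloud Logging to see the Compute API side of the creation.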

OEM 13C Log File Monitoring

I have installed OEM 13c and deployed a couple of agents, and I want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I test it, it does not show any alerts when they are written to the log file. On the agent server, I have tailed the file and can see the messages coming into the log file.
Does anyone have experience adding log files to OEM? I could have configured it wrong. Or are there any troubleshooting steps I can follow to see whether the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system, it would be difficult to tell you the exact cause of this issue. However, I can list a few potential causes of this issue that I have experienced personally:
Permissions. The Oracle Enterprise Manager Agent is very convoluted when it comes to system permissions on a remote server. The agent can be owned and run as any number of users, but during metric evaluation it may also need sudo or PAM-authentication permissions to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if that is necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would play with the wildcard elements a bit on the "String" component to ensure that it isn't as simple as adding wildcards to the beginning and end of the string. Without diving into the binaries of the agent plugins, it is difficult to assess exactly how the agent evaluates this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This lets you trigger the metric collection locally on the server and see exactly what is being performed at the agent level.
On the system I was running (12c), the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
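Before forcing a collection, it may also be worth confirming that the agent is healthy and uploading to the OMS; these are standard emctl commands on an agent host (paths vary by install):

    # Check that the agent is up and has no upload backlog
    emctl status agent

    # Force a pending-metric upload to the OMS
    emctl upload agent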

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from an application server and a web server.
I have two different environments on which JBoss is deployed.
Currently, I have a log file in the web server environment that records errors, and I want the application server to write its logs to the same file.
Please suggest how to do this.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from 2 different processes, your file will get corrupted unless you use file locking. However, with locking your throughput decreases significantly, and locking isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
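To illustrate the single-process approach, a minimal log4j2 sketch that ships events over TCP to a central listener (for example a Logstash tcp input) instead of having two JVMs write the same file; the host, port, and layout are assumptions:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="warn">
      <Appenders>
        <!-- Send events to a central aggregator instead of having two
             JVMs write the same file. Host and port are placeholders. -->
        <Socket name="central" host="logs.example.com" port="4560">
          <!-- JsonLayout requires the Jackson jars on the classpath. -->
          <JsonLayout compact="true" eventEol="true"/>
        </Socket>
      </Appenders>
      <Loggers>
        <Root level="error">
          <AppenderRef ref="central"/>
        </Root>
      </Loggers>
    </Configuration>

Both servers would use the same appender configuration, so all events end up in one place regardless of which environment produced them.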

In rsyslog, how can I trigger an hourly email aggregating entries in one of our logs?

One of our apps has been configured to log certain errors to a log on a remote server using rsyslog. I've been asked to provide an hourly email that lists the errors logged within the last hour. I've looked at ommail, but it doesn't seem to do exactly this. Any suggestions on how best to do this?
I would go low tech on this:
put the error messages in a separate file with a rule like
*.err /var/log/error.log
then rotate it hourly via logrotate.
From logrotate, you can run a script in the prerotate or postrotate section, where you can take the contents of the file and send them via email; see the sketch after this answer.
ommail is more for sending logs matching a certain filter, so it would be hacky to make it send such "digests".
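A sketch of the logrotate side, assuming the error log path above; the recipient address and the mail command are placeholders, and hourly rotation also requires logrotate itself to be run hourly from cron:

    /var/log/error.log {
        hourly
        rotate 24
        missingok
        notifempty
        prerotate
            # Mail the past hour's errors before the file is rotated away.
            mail -s "Hourly error digest" ops@example.com < /var/log/error.log
        endscript
    }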
