I am running logstash-forwarder to ship logs.
The forwarder, Logstash, and Elasticsearch are all on localhost.
I have one UI application whose log file is read by the shipper. While the forwarder is running, archiving of the log file doesn't work; logs keep being appended to the same file. I have configured the log file to archive every minute, so I can see the change. As soon as I stop the forwarder, log file archiving starts working.
I guess the forwarder keeps holding a file handle, and that's why the file does not get archived.
Please help me on this.
regards,
Sunil
Are you running on Windows? There are known, unfixed bugs there.
See https://github.com/elasticsearch/logstash/issues/1944
for a possible workaround.
Please let me know what exactly the migration.conf file created during installation of the Splunk UF is.
File path : /opt/splunkforwarder/etc/system/local/migration.conf
Thanks in Advance,
NVP
Have you looked at the contents of the file? It shows what Splunk did when a new version of the forwarder was installed. When the new version runs for the first time, a number of checks are run and files may be modified or removed. The migration.conf file lists each action that was performed.
It's a good idea to review this log after each upgrade, because it may identify local changes that override new features.
In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. These files are between 130 MB and 1.3 GB in size, even though my deployed app only uses 4 MB.
Can I delete these files *.dmp and *.phd?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually such a dump is taken to capture the state of the application at the point where something goes wrong, so that you can examine it with analysis tools and work out what happened.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryError is thrown. This allows you to look at the dump file and see what's using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.
BACKGROUND:
We have rsyslog creating log file directories like: /var/log/rsyslog/SERVER-NAME/LOG-DATE/LOG-FILE-NAME
So multiple servers write their logs, split by date, to a central location.
Now, to read these logs and store them in Elasticsearch for analysis, my Logstash config file looks something like this:
input {
  file {
    path => "/var/log/rsyslog/**/*.log"
  }
}
ISSUE:
As the number of log files in the directory grows, Logstash opens file descriptors (FDs) for the new files but never releases the FDs for log files it has already read.
Since log files are generated per date, a file is of no further use once it has been read; it will not be updated after that date.
I have increased the open-file limit to 65K in /etc/security/limits.conf.
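For reference, assuming Logstash runs as a dedicated logstash user (the user name here is an assumption), the corresponding limits.conf entries look like this:
logstash soft nofile 65536
logstash hard nofile 65536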
Can Logstash be made to close a handle after some time, so that the number of open file handles does not keep growing?
I think you may have hit this bug: http://github.com/elastic/logstash/issues/1604. Do you have the same symptoms? Exceptions in logs after some time? If you run sudo lsof | grep java | wc -l do you see the descriptors steadily increasing over time? (some of them might close, but some will stay open and their number will increase)
I've been tracking this issue for some time, and I don't know that it's properly solved.
We were in a similar, perhaps bigger, boat: Logstash couldn't open handles for hundreds of thousands of log files on a box, even though very few of them were being actively written to. LOGSTASH-271 captured this issue, and there were some attempts to patch Logstash, including PR #1260.
It seems a fix may have made its way into Logstash 1.5 with PR #1545, but I've never tested it personally. We ended up forking the underlying library Logstash uses to implement the file input, called FileWatch, into FFileWatch, which adds an "eviction mechanism".
The basic idea behind this approach is to keep files open only while they're being written. Normally, Logstash opens a handle on the file and keeps it open forever, but FFileWatch adds an option to close the handle if the file has not changed recently (eviction_interval). I then created a custom build of Logstash using the forked gem.
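For illustration only, assuming the forked gem keeps the standard file-input syntax and exposes the eviction setting as a plugin option (the option placement and the value's units below are assumptions; check the FFileWatch documentation), the input might look something like:
input {
  file {
    path => "/var/log/rsyslog/**/*.log"
    # hypothetical option from the fork: close handles on files that
    # have not changed within this many seconds
    eviction_interval => 300
  }
}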
Obviously this is less than ideal, but it worked for us. Eventually we dropped Logstash entirely for picking up log files, although we still use it further down the log processing pipeline. We implemented our own lightweight log shipper (Franz), which does not suffer from this issue.
I'm trying to experiment with using scripts in the config/scripts directory. The Elasticsearch docs here say this:
Save the contents of the script as a file called config/scripts/my_script.groovy on every data node in the cluster:
This seems like it's probably really easy, but I'm afraid I don't understand how exactly to put a groovy file "on every data node in the cluster". Would this normally be done through the command line somehow, or can it be done by manually moving the groovy file (in Finder on OSX for example)? I have a test index, but when I look at the file structure on the nodes I'm confused where to put the groovy file. Help, pretty please.
You just need to copy the file to each server running elasticsearch. If you're just running elasticsearch on your own computer, go to the folder you've installed elasticsearch into and copy the file into config/scripts in there (you may have to create the folder first). It doesn't matter how the file gets there.
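If your cluster spans several machines, a small shell loop is enough; the hostnames and install path below are placeholders:
for node in es-node1 es-node2 es-node3; do
  # copy the script into config/scripts on each data node
  scp my_script.groovy "$node":/path/to/elasticsearch/config/scripts/
done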
You should see an entry in the logs (or the console if you are running in the foreground) along the lines of
compiling script file [/path/to/elasticsearch/config/scripts/my_script.groovy]
This won't show up straightaway - by default elasticsearch checks for new/updated scripts every 60 seconds (you can change this with the watcher.interval setting)
Since file scripts are deprecated (elastic/elasticsearch#24552 & elastic/elasticsearch#24555), this approach is not going to work anymore.
The API is now the only way.
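For example, a script can be stored through the cluster API instead; this is only a sketch, assuming Elasticsearch 5.6 or later where Painless has replaced Groovy, and the script id and body are placeholders:
curl -X PUT "http://localhost:9200/_scripts/my_script" -H 'Content-Type: application/json' -d'
{
  "script": {
    "lang": "painless",
    "source": "Math.log(_score) + params.boost"
  }
}'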
I'm using the check_logfiles nagios plugin to monitor Oracle alert logs. It works wonderfully for that purpose.
However, I also need to monitor an entire directory of Oracle trace logs for errors, because the database is always creating new log files with different names.
What I need to know is the best way to scan an entire directory of Oracle trace logs to find out which ones contain the patterns that indicate Oracle alerts.
Using check_logfiles, I tried specifying these options -
--criticalpattern='ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
and to specify the directory of logs -
--logfile='/global/cms/u01/app/orahb/admin/opbhb/udump/'
and
--logfile="/global/cms/u01/app/orahb/admin/opbhb/udump/*"
Neither of these has any effect; the check runs but returns OK. Does anyone know whether the check_logfiles plugin can monitor a directory of files rather than just a single file? Or is there another, better way to achieve the same goal of monitoring a bunch of files whose names can't be specified ahead of time?
Use a script which (a sketch follows below):
Opens each file
Copies the entries that match the pattern
Outputs the matches to a file
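A minimal sketch of such a script, which could be run from cron or wrapped by a Nagios check; the trace directory comes from the question, while the report path and the choice to scan every file in the directory are assumptions:
#!/bin/sh
# scan every Oracle trace file for the critical patterns and collect the hits
TRACE_DIR=/global/cms/u01/app/orahb/admin/opbhb/udump
PATTERNS='ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
REPORT=/tmp/oracle_trace_alerts.log

: > "$REPORT"
for f in "$TRACE_DIR"/*; do
  [ -f "$f" ] || continue
  # -H prefixes each matching line with the file it came from
  grep -EH "$PATTERNS" "$f" >> "$REPORT"
done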