Automatic log file rotation in GlassFish

I tried a crazy thing: setting attributes on the session for a client request.
// intentionally unbounded loop: keep adding session attributes until something gives
for (int i = 0; i < i + 1; i++) {
    session.setAttribute("dumpData" + i, new Object().toString());
    System.out.println("$$$$$$$$--- Count = " + i);
}
I just wanted to know how much the session can take. But while looking into the log file for the attribute count, I found something strange: log file rotation... was also in the log file content. First I searched the entire project for any file that logs log file rotation..., but there was none. So I was wondering whether GlassFish rotates the log files automatically. If so, can anybody explain, or point to any useful links with an explanation?

Yes, GlassFish rotates the log files automatically.
From the Oracle GlassFish Server Administration Guide:
Oracle GlassFish Server by default rotates log files when they reach 2 MB in size. However, you can change the default rotation settings. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. In addition to changing when rotation occurs, you can also:
- Specify the maximum number of rotated files that can accumulate. By default, Oracle GlassFish Server does not limit the number of rotated log files that are retained. However, you can set a limit. After the number of log files reaches this limit, subsequent file rotations delete the oldest rotated log file.
- Rotate the log file manually. A manual rotation forces the immediate rotation of the target log file.
Have a look at /GLASSFISH_FOLDER/glassfish/domains/domain1/config/logging.properties. It should be easy to understand. For detailed explanations have a look at the GlassFish Server Administration Guide.
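For reference, the rotation-related entries in logging.properties look roughly like this (key names as found in a typical GlassFish 3.x/4.x default configuration; verify them against your own file):
# rotate when the file reaches ~2 MB (the default)
com.sun.enterprise.server.logging.GFFileHandler.rotationLimitInBytes=2000000
# 0 = no time-based rotation; set a number of minutes to enable it
com.sun.enterprise.server.logging.GFFileHandler.rotationTimelimitInMinutes=0
# 0 = keep an unlimited number of rotated files; set a limit to cap them
com.sun.enterprise.server.logging.GFFileHandler.maxHistoryFiles=0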
See also:
Turn off Glassfish Log Rotation

Related

PostgreSQL Log Rotation Size Reached File Limit

I have configured the following settings in PostgreSQL 13.
logging_collector = on
log_rotation_size = '100MB'
log_truncate_on_rotation = on
log_filename = 'postgresql-%Y-%m-%d.log'
My issue is that when the log file reaches 100MB, PostgreSQL just keeps appending to it; I think this is because of the log_filename. Is there any way I can rename the file when it reaches log_rotation_size?
I need to keep log_filename in this format (without the time) so that whenever I restart the service, logging continues in the same log file.
Do I have to run some script or service in the background to monitor the data/log folder and rename the file when it reaches the limit?
As the documentation says:
However, truncation will occur only when a new file is being opened due to time-based rotation, not during server startup or size-based rotation.
Truncating the log file in your case would mean to lose recent log information, so PostgreSQL won't do it.
I can think of no better way than a cron job that removes the log file when it approaches the limit. Then size-based log rotation will create the file again.
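A minimal sketch of such a job, assuming a Debian-style data directory and a 90MB trigger (both are assumptions; note that the stat syntax below is GNU coreutils):
#!/bin/sh
# trim_pg_log.sh: drop today's PostgreSQL log once it gets close to log_rotation_size
LOGFILE="/var/lib/postgresql/13/main/log/postgresql-$(date +%Y-%m-%d).log"
LIMIT=$((90 * 1024 * 1024))   # act a bit before the configured 100MB
if [ -f "$LOGFILE" ] && [ "$(stat -c %s "$LOGFILE")" -ge "$LIMIT" ]; then
    rm -f "$LOGFILE"          # the logging collector re-creates the file on the next write
fi
Schedule it, for example, every five minutes from crontab:
*/5 * * * * /usr/local/bin/trim_pg_log.sh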

derby.stream.error.rollingFile.limit property in derby.properties is not working: rollover does not happen as expected

Need help with the configuration below, which is not taking effect in derby.properties.
Kindly have a look and see if you can help resolve or shed light on this.
"derby.stream.error.rollingFile.limit=5120000" in derby.properties doesn't trigger rotation even after the 5MB file size is reached. The log keeps growing as much as it can.
But other parameters like "derby.infolog.append=true" do take effect, as I could see derby.log being appended to instead of a new log being created.
Rolling log is implemented in version 10.11.1.1 and above. https://db.apache.org/derby/releases/release-10.11.1.1.cgi
You are most likely using an older version.
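Once you are on 10.11.1.1 or newer, a derby.properties along these lines should enable rolling (the rollingFile property names are from the Derby documentation; the values are just examples):
# switch the error log over to the rolling-file implementation
derby.stream.error.style=rollingFile
# roll over once the current file reaches ~5 MB
derby.stream.error.rollingFile.limit=5120000
# keep at most 5 rolled files before the oldest is overwritten
derby.stream.error.rollingFile.count=5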

Are logs included while creating the dot file in gstreamer?

I am trying to understand how the dot file is created from GStreamer logs.
When I generated the GStreamer logs with GST_DEBUG=4, it produced a huge number of logs.
At the same time, when I check the dot file generated by GStreamer, it has specific information about the pipeline creation, not the log information from after the pipeline is created (playing, paused, seeking, ...).
I have some questions:
What information does the dot file contain compared to the complete log file?
If all the logs are not included in the dot file, then how can we debug that log information using the dot graph (with tools like Graphviz)?
The dot file is a graphical representation of your complete pipeline: the interconnection of the different elements in the pipeline along with information about the caps negotiation. For example, when your pipeline grows too large and you need information about how elements are connected and how data flows, dot files will prove useful. Follow this link.
With GST_DEBUG=4, all the logs, warnings, and errors of the different elements will be output. This is particularly useful when you want to understand, at a lower level, what is going on inside the elements when data flows along the pipeline. You can get information about different events, pad information, buffer information, etc. Follow this link.
To get more information about a specific element you could also use the following:
GST_DEBUG=<element_name>:4
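As a rough sketch of the usual workflow (GST_DEBUG_DUMP_DOT_DIR is the standard environment variable for dumping dot files; the pipeline and paths below are just placeholders):
# tell GStreamer where to write the .dot dumps, then run the pipeline
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dot
export GST_DEBUG=4
gst-launch-1.0 playbin uri=file:///path/to/video.mp4
# render one of the generated dumps to an image with Graphviz
dot -Tpng /tmp/gst-dot/<some-dump>.dot -o pipeline.png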

How to enable GC logging for Hadoop MapReduce2 History Server, while preventing log file overwrites and capping disk space usage

We recently decided to enable GC logging for the Hadoop MapReduce2 History Server on a number of clusters (exact version varies) as an aid to looking into history-server-related memory and garbage collection problems. While doing this, we want to avoid two problems we know might happen:
overwriting of the log file when the MR2 History server restarts for any reason
the logs using too much disk space, leading to disks getting filled
When Java GC logging starts for a process it seems to replace the content of any file that has the same name. This means that unless you are careful, you will lose the GC logging, perhaps when you are more likely to need it.
If you keep the cluster running long enough, log files will fill up disk unless managed. Even if GC logging is not currently voluminous we want to manage the risk of an unusual situation arising that causes the logging rate to suddenly spike up.
You will need to set some JVM parameters when starting the MapReduce2 History Server, meaning you need to make some changes to mapred-env.sh. You could set the parameters in HADOOP_OPTS, but that would have a broader impact than just the History server, so instead you will probably want to set them in HADOOP_JOB_HISTORYSERVER_OPTS.
Now let's discuss the JVM parameters to include in those.
To enable GC logging to a file, you will need to add -verbose:gc -Xloggc:<log-file-location>.
You need to give the log file name special consideration to prevent overwrites whenever the server is restarted. It seems like you need to have a unique name for every invocation so appending a timestamp seems like the best option. You can include something like `date +'%Y%m%d%H%M'` to add a timestamp. In this example, it is in the form of YYYYMMDDHHMM. In some versions of Java you can put "%t" in your log file location and it will be replaced by the server start up timestamp formatted as YYYY-MM-DD_HH-MM-SS.
Now onto managing use of disk space. I'll be happy if there is a simpler way than what I have.
First, take advantage of Java's built-in GC log file rotation. -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M is an example of enabling this rotation, having up to 10 GC log files from the JVM, each of which is no more than approx 10MB in size. 10 x 10MB is 100MB max usage.
With the GC log file rotation in place with up to 10 files, '.0', '.1', ... '.9' will be added to the file name you gave in Xloggc. .0 will be first and after it reaches .9 it will replace .0 and continue on in a round robin manner. In some versions of Java '.current' will be additionally put on the end of the name of the log file currently being written to.
Due to the unique file naming we apparently have to use to avoid overwrites, you can have 100MB per History Server invocation, so this is not a total solution to managing the disk space used by the server's GC logs. You will end up with a set of up to 10 GC log files on each server invocation, and this can add up over time. The best solution (under *nix) would seem to be to use the logrotate utility (or some other utility) to periodically clean up the GC logs that have not been modified in the last N days.
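A minimal sketch of such a cleanup, using find from cron rather than logrotate (the log directory, file-name pattern, and 14-day retention are assumptions chosen to match the naming used further below):
# crontab entry: each night at 03:00, delete History Server GC logs untouched for 14 days
0 3 * * * find /var/log/hadoop-mapreduce/mapred -name 'mapred-jobhistory-gc.log-*' -mtime +14 -delete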
Be sure to do the math and make sure you will have enough disk space.
People frequently want more details and context in their GC logs than the default, so consider adding in -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps.
Putting this together, you might add something like this to mapred-env.sh:
## enable GC logging for MR2 History Server:
TIMESTAMP=`date +'%Y%m%d%H%M'`
# GC log location/name prior to .n addition by log rotation
JOB_HISTORYSERVER_GC_LOG_NAME="{{mapred_log_dir_prefix}}/$USER/mapred-jobhistory-gc.log-$TIMESTAMP"
JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS="-verbose:gc -Xloggc:$JOB_HISTORYSERVER_GC_LOG_NAME"
JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
JOB_HISTORYSERVER_GC_LOG_OPTS="$JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS $JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS $JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS"
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS $JOB_HISTORYSERVER_GC_LOG_OPTS"
You may find that you already have a reference to HADOOP_JOB_HISTORYSERVER_OPTS so you should replace or add onto that.
In the above, you can change {{mapred_log_dir_prefix}}/$USER to wherever you want the GC logs to go (you probably want them to go to the same place as the MapReduce History Server logs). You can change the log file naming too.
If you are managing your Hadoop cluster with Apache Ambari, then these changes would be in MapReduce2 service > Configs > Advanced > Advanced mapred-env > mapred-env template. With Ambari, {{mapred_log_dir_prefix}} will be automatically replaced with the Mapreduce Log Dir Prefix defined a few rows above the field.
GC logging will start upon restarting the server, so you may need a short outage to enable this.

h2.db file size difference

I have an application that generates an H2 database.
When I execute the application on Windows XP it generates an .h2.db file with size 176K, but when I execute the same application on Unix (SunOS) it generates an .h2.db file with size 1126K, although they contain exactly the same data.
Can anyone explain what might be causing the UNIX generated file to be so much larger?
Thanks!
Martin
The easiest way to shrink the database file in this case is to open and close it. An alternative is to run the statement SHUTDOWN COMPACT.
In your case, the "Unix" database is not fully compacted, meaning it contains empty pages in the database file (the empty pages most likely temporarily held the transaction log; this is normal). When closing the database, H2 will try to compact the database file by moving unused pages to the end of the file and then truncating the file. The default compact time is 0.2 seconds. Probably these 0.2 seconds were not quite enough to fully compact the database on the "Unix" platform, but were enough on the "Windows" platform.
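A minimal example of that alternative (run from any SQL console or JDBC connection; if you would rather keep the normal close behaviour, some H2 versions of that era also let you raise the compact time via a MAX_COMPACT_TIME setting in the JDBC URL, e.g. ;MAX_COMPACT_TIME=10000, but check the documentation for your version):
-- closes the database and compacts the file as much as possible
SHUTDOWN COMPACT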
