Our Spring Boot application generates a log file using logback based on activity within the OpenID Connect authentication module. This file is read by Splunk Universal Forwarder for aggregation on a central server. The "splunk" user is not in the same group as the application user, so an ACL is installed on the log file and its parent directory to allow splunk to read the file.
Recently the log was rotated by logback as configured. This action appears to create a new file and rename and compress the existing one. As a result the ACL is no longer present on the active log file, making it impossible for splunk to read it, which raises alarms at the Splunk central server.
Is there any way to make logback re-create the ACL on the log file during rotation, or failing that, to trigger a shell script to do so?
I haven't attempted anything yet because I don't know what is possible within Spring's logback integration. But we need this in place before the log rotates next time.
ACLs are created from the shell:
setfacl -m user:splunk:r-x spring_audit.log
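One possibility (an untested sketch, not a documented logback extension point as such): logback's RollingFileAppender has a public rollover() method, so a subclass could re-apply the ACL, or shell out to a script, right after each rotation. Alternatively, a default ACL on the parent directory (setfacl -d -m user:splunk:r-x <dir>) might sidestep the problem entirely, since files created in that directory then inherit the entry. The subclass could look roughly like this, with the class referenced from logback-spring.xml in place of RollingFileAppender:

import java.io.IOException;
import ch.qos.logback.core.rolling.RollingFileAppender;

// Sketch: re-run setfacl each time logback rolls the active file over.
// The ACL spec mirrors the setfacl command above; adjust as needed.
public class AclRestoringAppender<E> extends RollingFileAppender<E> {
    @Override
    public void rollover() {
        super.rollover(); // renames the old file and opens a fresh one
        try {
            new ProcessBuilder("setfacl", "-m", "user:splunk:r-x", getFile())
                    .start()
                    .waitFor();
        } catch (IOException e) {
            addError("Could not re-apply ACL on " + getFile(), e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            addError("Interrupted while re-applying ACL on " + getFile(), e);
        }
    }
}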
I am using Spring Boot 2.7 and trying to configure the log file pattern to roll daily.
I am configuring logging through the application properties file only, as that's the preference here.
I added the following line to the properties file, but it does not seem to work:
logging.logback.rollingpolicy.file-name-pattern=myservice-%d{yyyy-MM-dd}.log
Any clues what I may be missing?
Also, is there a way to check daily log rolling without having to wait for EOD :)
First, you have to specify the file name:
logging.file.name=myservice.log
Then you can use the rolling file-name pattern:
logging.logback.rollingpolicy.file-name-pattern=myservice-%d{yyyy-MM-dd}.log
To force a rollover without waiting for the date to change, you can set the maximum file size to something small:
logging.logback.rollingpolicy.max-file-size=100K
Note that once max-file-size is set, Spring Boot uses logback's size-and-time-based policy, which requires an %i index token in the pattern, e.g. myservice-%d{yyyy-MM-dd}.%i.log. Another quick test is a minute-granular pattern such as myservice-%d{yyyy-MM-dd-HH-mm}.log, since logback derives the rollover period from the finest date unit in the pattern.
To specify the directory, set this property:
logging.file.path=/var/logs
Be aware that logging.file.name takes precedence over logging.file.path if both are set, so to combine a custom name with a directory, put the directory into logging.file.name itself (e.g. logging.file.name=/var/logs/myservice.log).
The documentation can be found here:
https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.logging
When I run my server, which is WebSphere Liberty, it generates a log file at C:\Softwares\wlp\usr\servers\theory\logs\messages.logs. There is only theory.war in C:\Softwares\wlp\usr\servers\theory\dropins.
When I place more than one war file in dropins, such as theory.war, epic.war, and success.war, and start the server, it still generates a single log file at C:\Softwares\wlp\usr\servers\theory\logs\messages.logs.
I need to generate a separate log file for each war, like:
C:\Softwares\wlp\usr\servers\theory\logs\theory.logs
C:\Softwares\wlp\usr\servers\theory\logs\epic.logs
C:\Softwares\wlp\usr\servers\theory\logs\success.logs
I'm using the java.util.logging package.
If you want separate log files, you have to either write a custom FileHandler for JUL or use a third-party logging framework such as Log4j; a rough sketch of the custom-handler idea follows below.
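For illustration only, the custom-handler approach might look roughly like this. It assumes each application logs under its own package prefix; the package names are invented, and the paths are taken from the question:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Sketch: one FileHandler per application, each filtering on a
// logger-name prefix so only that app's records reach its file.
public class PerAppLogging {
    static void route(String loggerPrefix, String pattern) throws IOException {
        FileHandler handler = new FileHandler(pattern, true); // append mode
        handler.setFormatter(new SimpleFormatter());
        handler.setFilter(record -> record.getLoggerName() != null
                && record.getLoggerName().startsWith(loggerPrefix));
        Logger.getLogger("").addHandler(handler); // root logger sees every record
    }

    public static void main(String[] args) throws IOException {
        route("com.example.theory", "C:/Softwares/wlp/usr/servers/theory/logs/theory.log");
        route("com.example.epic", "C:/Softwares/wlp/usr/servers/theory/logs/epic.log");
        route("com.example.success", "C:/Softwares/wlp/usr/servers/theory/logs/success.log");
    }
}

Bear in mind that Liberty manages JUL itself, so wiring something like this into the server (rather than a standalone JVM) may take additional configuration.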
You could also use Liberty's binary logging and then use the binaryLog tool to filter messages for a given app.
However, I'd STRONGLY recommend NOT doing any of that: new apps that follow twelve-factor principles should write all messages to the standard output and standard error streams, which is what JUL does by default. Those streams are consumed by containers, container orchestrators, and the logging infrastructure built around them. That infrastructure lets you filter log messages on various parameters without needing separate log files, whereas separate log files are something you would have to integrate manually.
I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
Currently I have a log file in the web server environment that records errors, and I want the application server to write its logs to the same file as well.
Please suggest.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you write to a file from two different processes, the file will get corrupted unless you use file locking. However, locking decreases throughput significantly, and it isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated; the sketch below shows the per-write cost that locking would impose.
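To make that trade-off concrete, here is a rough sketch in plain Java NIO (not log4j2 code) of what safe concurrent appends from two JVMs require: an exclusive OS-level lock around every single write, which serializes both processes:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: every append acquires an exclusive cross-process lock,
// so the two servers take turns on each log event.
public class SharedFileAppend {
    public static void append(Path file, String line) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            try (FileLock lock = ch.lock()) { // blocks until the other process releases
                ch.write(ByteBuffer.wrap(
                        (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8)));
            }
        }
    }
}

Opening, locking, and releasing on every event is exactly the overhead that makes an aggregating process (or Splunk/ELK forwarders) the better option.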
I have a Windows service application that uses the Topshelf library, and I'm installing it in AWS during cfn-init using the handy command-line features that you get with Topshelf.
C:\handy_service\> HandyService.exe install start
This basically installs the service in the registry and then calls sc start, but it's quite useful because it checks that the service name matches what you expect, and it allows you to configure the user the service will run as via the nice fluent API.
The installer code also writes some diagnostic logs via NLog, if the service is configured to use NLog in general.
The problem is this: the installer runs as the default local administrator account that the AMI starts with and the NLog file gets created by this user. When the service starts up as the Network Service user, it doesn't have permission to write to the NLog log file.
How can I get my service to write to the log file? I've thought about setting the permissions programmatically, but it looks nasty, and I'd have to determine the log file name, as this is generated dynamically based on the EC2 instance id. Also, it's not entirely obvious at what point the log file is first created. The easiest hack I might go with is having two NLog.configs and switching one out at the end of the install, after the logger is flushed. But because the service startup and the installer exit overlap in time, I expect I'd lose a few lines of logging there.
Any clean suggestions would be greatly appreciated!
In the end I went with setting the permissions on the logs folder at deploy time. It's actually pretty straightforward with icacls, only a couple of lines in Rake for instance, assuming you know where your logs folder is going to be:
sh %{icacls "#{logs_dir}" /grant "#{username}":(OI)(W)}
Not calling UseNLog() in the service config would also be a simple option; any install-time errors would then go to the Windows event log instead.
I'm looking to start Spring XD in distributed mode (more specifically, deploying it with BOSH). How does the admin component communicate with the module containers?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would have thought I'd need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output to either console, and no output to the logs.
If you are using the xd-container script, then redis.properties is expected to be under XD_HOME/config, where XD_HOME points to the base directory in which you have the bin, config, lib & modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]