Does kubespray install any log rotation policies?

I've looked through the GitHub project (https://github.com/kubernetes-sigs/kubespray), but I don't see any references to log rotation.
Is there a standard approach?

kube-apiserver handles its own log rotation for the audit log using these parameters:
- --audit-log-path=/var/log/apiserver/audit.log
- --audit-log-maxbackup=10
- --audit-log-maxage=30
- --audit-log-maxsize=100
If you are interested in using an audit log, the description at https://medium.com/faun/kubernetes-on-premise-cluster-auditing-eb8ff848fec4 is very good.
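For illustration, here is a minimal sketch of how those flags might appear in a kube-apiserver static pod manifest. The manifest path and the surrounding fields are assumptions for the example, not something kubespray is confirmed to generate:

# Excerpt from a kube-apiserver static pod manifest; the path
# /etc/kubernetes/manifests/kube-apiserver.yaml is an assumption.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxbackup=10   # keep at most 10 rotated audit files
    - --audit-log-maxage=30      # delete rotated files older than 30 days
    - --audit-log-maxsize=100    # rotate once the file reaches 100 MB

With these four flags the apiserver rotates the audit log itself, so no external logrotate entry is needed for that particular file.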

Related

Define a level of logs in DataStage

When I want to see the logs of a DataStage job, it only shows me the info-level logs.
I want to define a new level to see more logs, please.
Any help?
Thanks in advance
DataStage knows three levels of log entries, as you mentioned already:
Info
Warning
Error
They will be shown if they appear, so you do not have to do anything for that (contrary to what your question assumes).
You can "demote" a warning to an informational message, or promote one, e.g. from an informational message to a warning.
EDIT:
To do this you can define so-called Message Handlers.
This can be done at the job level or the project level (or as a runtime handler).
In DataStage Director you need to follow these steps: Menu - Tools - Message Handler Management
There are lots of descriptions out there - just search for DataStage Message Handler.

Google Cloud Dataflow jobs failing with error 'Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5...'

SDK: Apache Beam SDK for Go 0.5.0
We are running Apache Beam Go SDK jobs in Google Cloud Dataflow. They had been working fine until recently, when they intermittently stopped working (no changes were made to code or config). The error that occurs is:
Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5 for /var/opt/google/staged/worker: ..., want ; bad MD5 for /var/opt/google/staged/worker: ..., want ;
(Note: it seems as if the second hash value is missing from the error message.)
As best I can guess, there's something wrong with the worker: it seems to be comparing MD5 hashes of the worker binary and is missing one of the values. I don't know exactly what it's comparing against, though.
Does anybody know what could be causing this issue?
The fix to this issue seems to have been to rebuild the worker_harness_container_image with the latest changes. I had tried this, but I didn't have the latest release when I built it locally. After I pulled the latest from the Beam repo, rebuilt the image (as per the notes at https://github.com/apache/beam/blob/master/sdks/CONTAINERS.md), and reran the job, it seemed to work again.
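For reference, a rough sketch of that rebuild-and-rerun flow. The Gradle task name, image name, project, and bucket are assumptions that vary by Beam version; the CONTAINERS.md notes linked above are the authoritative steps:

# Rebuild the Go SDK worker container from a fresh checkout of the Beam repo
# (the exact Gradle task name differs across Beam versions; see CONTAINERS.md)
./gradlew :sdks:go:container:docker -Pdocker-repository-root=gcr.io/my-project

# Push the rebuilt image so Dataflow workers can pull it (name is illustrative)
docker push gcr.io/my-project/go:latest

# Launch the pipeline against the rebuilt image
go run ./main.go \
  --runner=dataflow \
  --project=my-project \
  --staging_location=gs://my-bucket/staging \
  --worker_harness_container_image=gcr.io/my-project/go:latest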
I'm seeing the same thing. If I look into the Stackdriver logs I see this:
Handler for GET /v1.27/images/apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515/json returned error: No such image: apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
However, I can pull the image just fine locally. Any ideas why Dataflow cannot pull it?

Daily rolling log file in Websphere Liberty (16.0.0.4-WS-LIBERTY-CORE)

How do I create a daily rolling log file in WebSphere Liberty? I want the name of the log file to have a YYYYMMDD format.
Currently I'm only able to limit the max file size and max file count, give the file a static name instead of messages.log, and disable the console log:
<logging consoleLogLevel="OFF" maxFileSize="1" maxFiles="3" messageFileName="loggingMessages.log"/>
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
WebSphere Liberty does not currently have the ability to schedule log file rotation like traditional WAS. You can request this feature using the RFE site.
Alternatively, you could use an approach like Bruce mentioned - perhaps using a cron job to restart the server at midnight.
You might also consider configuring Liberty's binary logging. This creates a binary log repository that can be queried to produce actual log files (with filtering options, etc.), and it does have some time-based options.
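As a sketch of that binary logging approach (the server name, dates, and paths here are assumptions):

# In ${server.config.dir}/bootstrap.properties, switch the log provider:
websphere.log.provider=binaryLogging-1.0

# After a restart, query the binary repository into a dated text file:
bin/binaryLog view defaultServer --minDate=2017-02-01 --maxDate=2017-02-01 > messages-20170201.log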
Hope this helps, Andy
Probably not the answer you want, but if you restart the server it will roll the log.
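Building on that, a minimal sketch of the cron-based restart mentioned above, relying on Liberty rolling messages.log when the server restarts (the install path and server name are assumptions):

# crontab entry: restart the server at midnight so messages.log rolls over
0 0 * * * /opt/ibm/wlp/bin/server stop defaultServer && /opt/ibm/wlp/bin/server start defaultServer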

Dynamics CRM 2013: Audit logs not showing details

I have come across an odd error I have never encountered before in the audit logs:
As you can see, there are no details displayed, but rather just an image.
When I click on a specific line, I only get this:
I haven't the faintest idea of where to start looking.
I am a system administrator in the system so I don't think it's a permissions issue.
I can also confirm that there are real values in the system where the audit logs display that image instead of a value.
It seems to be a widespread issue across almost every entity in the system too.
EDIT:
I have looked in the audit management logs and found this:
I guess this indicates that no logs have been deleted?
This occurs when auditing was switched off and on again. From that moment on, the system cannot guarantee that the audit trail shown is complete, and therefore it displays a torn-page symbol.

Laravel logging threshold

I am configuring Monolog logging in Laravel and I am wondering if there is a way to specify a size threshold for the log file. For example, in log4php you have the maxFileSize property. Is there some way to do this with Monolog?
(Check this for how to configure custom monolog: Configuring Custom Logging in Laravel 5 )
According to the documentation, Laravel supports out of the box only the single, daily, syslog and errorlog logging modes.
I am wondering if there is a way to use something between single and daily. I do not want daily log files, but I also do not like the idea of one big file. I would like to be able to specify a threshold, for example 20 MB: when this size is reached, a new log file is created.
Does anybody have a solution for that?
Use a proper log rotation facility that is available in your OS of choice for that.
On a Linux machine: logrotate
On Mac OS X: newsyslog
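For example, a logrotate sketch that matches the 20 MB threshold asked about. The log path is an assumption; copytruncate is used so the PHP process's open file handle stays valid across rotation:

# Rotate the Laravel log once it reaches 20 MB, keeping five compressed copies.
# copytruncate copies then truncates the file in place, so PHP need not reopen it.
/var/www/app/storage/logs/laravel.log {
    size 20M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}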
The RotatingFileHandler in the Monolog package, which Laravel uses for logging, is intended to be just a workaround. From its own description:
"Stores logs to files that are rotated every day and a limited number of files are kept. This rotation is only intended to be used as a workaround. Using logrotate to handle the rotation is strongly encouraged when you can use it."
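If you use it anyway, here is a minimal sketch with Monolog's RotatingFileHandler. Note that it rotates per day, not by size, so it will not give you the 20 MB threshold you asked about; the log path is an assumption:

<?php
use Monolog\Logger;
use Monolog\Handler\RotatingFileHandler;

// Rotate daily and keep at most 7 files; rotation is date-based, not size-based.
$log = new Logger('app');
$log->pushHandler(new RotatingFileHandler('/var/www/app/storage/logs/laravel.log', 7, Logger::DEBUG));
$log->info('Hello from a rotated log file');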
