I am working with GLOG in one of our components.
My company enforces a specific log format.
Is there a way to change the GLOG log format? (I couldn't find an API for that.)
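For what it's worth, recent glog releases do let you install a custom prefix formatter, though the exact hook depends on the glog version and build options, so treat the following as a sketch rather than a drop-in (MyPrefixFormatter and the fields it prints are only illustrative):

#include <glog/logging.h>
#include <ostream>

// Sketch of a custom prefix formatter. The LogMessageInfo fields used here
// (severity, filename, line_number) are as documented for glog 0.6.x --
// verify against your glog's logging.h, and note that older builds may need
// custom-prefix support enabled (the WITH_CUSTOM_PREFIX CMake option).
void MyPrefixFormatter(std::ostream& s, const google::LogMessageInfo& l, void*) {
  s << "[" << l.severity << "] " << l.filename << ':' << l.line_number;
}

int main(int argc, char* argv[]) {
  FLAGS_logtostderr = 1;
  // glog 0.5/0.6 take the formatter as an extra InitGoogleLogging argument;
  // glog 0.7+ exposes google::InstallPrefixFormatter() instead.
  google::InitGoogleLogging(argv[0], &MyPrefixFormatter);
  LOG(INFO) << "hello";
  return 0;
}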
Hi there, I need to convert a Parquet file to CSV using only Logic Apps native tools. Is that even possible?
I researched similar issues and found how to use Azure Functions to change the format, but that isn't a native Logic Apps tool.
There's a custom connector that will transform Parquet to JSON for you.
It will also allow you to perform filter and sorting operations on the data prior to it being returned.
Documentation can be found here ... https://www.statesolutions.com.au/parquet-to-json/
My goal is to include my stacktrace and log message into a single log message for my Spring Boot applications. The problem I'm running into is each line of the stacktrace is a separate log message. I want to be able to search my logs for a log level of ERROR and find the log message with the stacktrace. I've found two solutions but not sure which to use.
I can use Logback to put them all in one line but would like to keep the new lines for a pretty format. Also the guide I found might override defaults that I want to keep. https://fabianlee.org/2018/03/09/java-collapsing-multiline-stack-traces-into-a-single-log-event-using-spring-backed-by-logback-or-log4j2/
I could also use ECS and concatenate it there, but it could affect other logs (though I think we only have Java apps). https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-concatanate-multiline.html
Which would be the best way to do it? Also, is there a better way to do this in Spring than the guide I found?
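For reference, here is a minimal logback-spring.xml sketch of the %replace approach the fabianlee guide describes; it rewrites the newlines in the stack trace to a literal \n so the whole ERROR event stays on one line. The appender name and pattern are only illustrative, and the defaults.xml include is the Spring Boot-provided way to keep the default settings the guide might otherwise override:

<configuration>
  <!-- Keep Spring Boot's default logging settings instead of replacing them. -->
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- Replace the newlines inside the exception with a literal \n and
           suppress Logback's default stack trace output with %nopex. -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg %replace(%exception){'[\r\n]+', '\\n'}%nopex%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>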
In Go, you can specify a specific zoneinfo.zip file to be used by adding a ZONEINFO environment variable that points to the specific file you'd like to use for time zone information. This is great as it allows me to ensure that the versions of the IANA time zone database that I'm using on my front-end and my back-end are the same.
However, there does not appear to be any way to detect if use of the specified zone info file has failed. Looking at the source code (https://golang.org/src/time/zoneinfo.go), it looks like any errors using the specified file will fail quietly and then go will proceed to check for default OS locations or the default $GOROOT location to pull time zone information from there. This is not the behavior that I would prefer as I would like to know with certainty that I am using my specified version of zone info.
I've thought of the following in terms of solutions, but I'm not happy with any of them.
I can check that the environment variable is set myself, but this is at best a partial solution as it doesn't tell me if the file is actually usable by LoadLocation.
I can ensure none of the backup locations for zone info exist. This seems a bit extreme and means I have to be extremely careful about the environment the code is running in, in both dev and production settings.
Does anyone know of a more elegant way to ensure that I am using the zoneinfo.zip file specified by my ZONEINFO environment variable?
Update: To address this problem I took inspiration from #chuckx's answer below and put together a Go package that takes the guesswork out of which time zone database is being used. Included in the readme are instructions on how to get the correct version of the time zone database using a Go installation.
Maybe consider not relying on the environment variable?
If you're not averse to distributing the unzipped fileset, you can easily use LoadLocationFromTZData(name string, data []byte). The second argument is the contents of an individual timezone file.
For reference, the functionality for processing a zip file is found in the unexported function loadTzinfoFromZip().
Step-by-step approach extracted from #Slotherooo's comment:
1. Make a local version of time.loadTzinfoFromZip(zipfile, name string) ([]byte, error)
2. Use that method to extract the []byte for the desired location from a zoneinfo.zip file
3. Use time.LoadLocationFromTZData() exclusively instead of time.LoadLocation
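A minimal sketch of those three steps, using archive/zip as a stand-in for the unexported loadTzinfoFromZip helper (the zone name and the fatal error handling are just examples):

package main

import (
	"archive/zip"
	"fmt"
	"io"
	"log"
	"os"
	"time"
)

// loadTzinfoFromZip mirrors the unexported helper in the time package: it pulls
// the raw TZif bytes for one zone name out of a zoneinfo.zip file.
func loadTzinfoFromZip(zipfile, name string) ([]byte, error) {
	r, err := zip.OpenReader(zipfile)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	for _, f := range r.File {
		if f.Name == name {
			rc, err := f.Open()
			if err != nil {
				return nil, err
			}
			defer rc.Close()
			return io.ReadAll(rc)
		}
	}
	return nil, fmt.Errorf("zone %q not found in %s", name, zipfile)
}

func main() {
	// Load the zone strictly from the file named by ZONEINFO; any problem with
	// the zip or the zone fails loudly instead of silently falling back.
	zipfile := os.Getenv("ZONEINFO")
	data, err := loadTzinfoFromZip(zipfile, "America/New_York")
	if err != nil {
		log.Fatalf("loading tz data: %v", err)
	}
	loc, err := time.LoadLocationFromTZData("America/New_York", data)
	if err != nil {
		log.Fatalf("parsing tz data: %v", err)
	}
	fmt.Println(time.Now().In(loc))
}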
How to create a daily rolling log file in Websphere Liberty? I want the name of the log file to have YYYYMMDD format.
Currently I'm only able to limit the max file size and the max number of files, set a static name for messages.log, and disable the console log.
<logging consoleLogLevel="OFF" maxFileSize="1" maxFiles="3" messageFileName="loggingMessages.log"/>
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
WebSphere Liberty does not currently have the ability to schedule log file rotation like traditional WAS. You can request this feature using the RFE site.
Alternatively, you could use an approach like Bruce mentioned - perhaps using a cron job to restart the server at midnight.
You might also consider configuring Liberty's binary logging. This will create a binary log file that can be queried to produce actual log files (with filtering options, etc.). It does have some time-based options. More info here.
Hope this helps, Andy
Probably not the answer you want, but if you restart the server it will roll the log.
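As a rough sketch of that restart-based workaround combined with the cron idea mentioned above (the install path and server name are placeholders for your environment):

# crontab entry: restart the Liberty server at midnight so the log rolls over
0 0 * * * /opt/ibm/wlp/bin/server stop myServer; /opt/ibm/wlp/bin/server start myServer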
I am configuring Monolog logging in Laravel and I am wondering if there is a way to specify a size threshold for the log file. For example, in log4php you have the maxFileSize property. Is there some way to do that with Monolog?
(Check this for how to configure custom monolog: Configuring Custom Logging in Laravel 5 )
According to the documentation, Laravel supports out of the box only the single, daily, syslog, and errorlog logging modes.
I am wondering if there is a way to use something between single and daily. I do not want daily log files, but I also don't like the idea of one big file. I would like to be able to specify a threshold, for example 20 MB, so that a new log file is created when that size is reached.
Does anybody have a solution for that?
Use a proper log rotation facility that is available in your OS of choice for that.
On a Linux machine - logrotate
On Mac OS X - newsyslog
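For example, a logrotate rule along these lines gives you a size threshold instead of daily rotation (the path assumes Laravel's default single-file log location; adjust it for your app):

/var/www/myapp/storage/logs/laravel.log {
    size 20M
    rotate 5
    missingok
    notifempty
    compress
    copytruncate
}

copytruncate rotates the file in place, so PHP keeps writing to the same handle and nothing needs to be signalled to reopen the log.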
The RotatingFileHandler in the Monolog package, which Laravel uses for logging, is itself intended only as a workaround. Its docblock says it stores logs to files that are rotated every day, with a limited number of files kept, and adds:
"This rotation is only intended to be used as a workaround. Using logrotate to handle the rotation is strongly encouraged when you can use it."