How can I use the go.uber.org/zap library to print each log level in a different color and append logs to different files depending on the log level?

I started using the zap logging library for my Go project, and I want to print to the tty console in different colors based on the log level.
I found the zap/internal/color package, which can render strings in different colors, but I want the log level itself to be colored.
I also want to write logs to different files depending on the log level.
How do I initialize and configure the zap logger for this?

I just ran into the same issue; here is a snippet that enables colored levels:
config := zap.NewDevelopmentConfig()
// CapitalColorLevelEncoder renders the level name (INFO, ERROR, ...) in color.
config.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
logger, _ := config.Build()
logger.Info("Now logs should be colored")
reference: https://github.com/uber-go/zap/pull/307
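For the second part of the question, writing different levels to different files, zap supports combining multiple cores with zapcore.NewTee. Below is a minimal sketch assuming an info.log/error.log split; the file names and the error-level cutoff are illustrative choices, not part of zap itself:

package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	encoder := zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig())

	// Illustrative destination files; error handling elided for brevity.
	infoFile, _ := os.OpenFile("info.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	errFile, _ := os.OpenFile("error.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)

	// Each core only accepts entries its LevelEnablerFunc approves of.
	infoLevels := zap.LevelEnablerFunc(func(l zapcore.Level) bool {
		return l < zapcore.ErrorLevel
	})
	errLevels := zap.LevelEnablerFunc(func(l zapcore.Level) bool {
		return l >= zapcore.ErrorLevel
	})

	core := zapcore.NewTee(
		zapcore.NewCore(encoder, zapcore.AddSync(infoFile), infoLevels),
		zapcore.NewCore(encoder, zapcore.AddSync(errFile), errLevels),
	)
	logger := zap.New(core)
	defer logger.Sync()

	logger.Info("goes to info.log")
	logger.Error("goes to error.log")
}

An entry is written to every core whose enabler accepts its level, so the two files stay disjoint here; use overlapping enablers if you also want a combined log.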

Related

How do I log to different files for different instances in a Spring project (slf4j)?

There is a logback.xml file. For example, if there are 3 instances running, I want to see 3 log files.
<fileNamePattern>${PATH}/application-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
The file pattern is like the one above. What is "%i"?
How can I log per instance using slf4j on a Spring Boot project?
The %i is used to number the archived log files when a log file exceeds its maximum size. For example, using MyLogFile%i.log with minimum and maximum indices of 1 and 3 will produce archive files named MyLogFile1.log, MyLogFile2.log, and MyLogFile3.log. See http://logback.qos.ch/manual/appenders.html#FixedWindowRollingPolicy for more details.
If you want to tell the log files of your various instances apart, try adding the hostname to the name:
${PATH}/application-${HOSTNAME}-%d{yyyy-MM-dd}.%i.log
In your code, set HOSTNAME to the instance's IP address.
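For context, here is a sketch of how that pattern might sit inside an appender. Since the pattern mixes %d and %i, logback needs SizeAndTimeBasedRollingPolicy; the appender name, size cap, and history below are illustrative:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <!-- %d rolls daily; %i numbers the parts once maxFileSize is exceeded -->
    <fileNamePattern>${PATH}/application-${HOSTNAME}-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
    <maxFileSize>50MB</maxFileSize>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>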

Are logs included while creating the dot file in gstreamer?

I am trying to understand how the dot file is created from gstreamer logs.
When I generated gstreamer logs with GST_DEBUG=4, it produced a huge number of log lines.
At the same time, the dot file generated by gstreamer contains only information about pipeline creation, not the log information from after the pipeline is created (playing, paused, seeking, ...).
I have some questions:
What information does the dot file contain compared to the complete log file?
If not all the logs are included in the dot file, how can we debug that log information using a dot graph (with tools like graphviz)?
The dot file is a graphical representation of your complete pipeline: the interconnection of the different elements, along with information about the caps negotiation. For example, when your pipeline grows too large and you need information about how the elements are connected and how data flows, dot files prove useful. Follow this link.
With GST_DEBUG=4, all the logs, warnings, and errors of the different elements are output. This is particularly useful when you want to understand, at a lower level, what is going on inside the elements as data flows along the pipeline. You can get information about different events, pad information, buffer information, etc. Follow this link.
To get more information about a specific element you could also use the following:
GST_DEBUG=<element_name>:4
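For completeness, this is roughly how the dot files are produced and rendered; the pipeline and the paths are just examples:

# Tell gstreamer where to dump dot files (one per pipeline state change).
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dot
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"

# Run any pipeline; this test pipeline is illustrative.
gst-launch-1.0 videotestsrc num-buffers=100 ! autovideosink

# Render one of the dumped graphs with graphviz.
dot -Tpng /tmp/gst-dot/*PLAYING.dot -o pipeline.png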

How do you use logrotate with output redirect?

I'm currently running a Ruby script which logs its HTTP traffic to stdout. Since I wanted the logs to be persistent, I redirected the output to a log file with ruby ruby_script.rb >> /var/log/ruby_script.log. However, the logs are now getting very large, so I wanted to set up logrotate with the following:
"/var/log/ruby_script.log" {
missingok
daily
rotate 10
dateext
}
However, after running logrotate --force -v ruby_script, where "ruby_script" is the name of the logrotate.d configuration file, no new file is created for the script to write to, and it keeps writing to the rotated file instead. I'm guessing this happens because the file descriptor opened by >> refers to the underlying file regardless of renames, and is unrelated to the filename after the initial open. So my question is: what is the correct way to achieve the behavior I'm looking for?
Take a look at the copytruncate option.
From man logrotate:
copytruncate: Truncate the original log file to zero size in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
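Applied to the configuration from the question, that would look like the sketch below (keep your own rotation settings; only copytruncate is new):

"/var/log/ruby_script.log" {
    missingok
    daily
    rotate 10
    dateext
    # Copy then truncate in place, so the fd held open by >> keeps
    # pointing at the live log file.
    copytruncate
}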

Spark: Silently execute sc.wholeTextFiles

I am loading about 200k text files in Spark using input = sc.wholeTextFiles("hdfs://path/*").
I then run println(input.count).
It turns out that my Spark shell outputs a ton of text (the path of every file), and after a while it just hangs without returning my result.
I believe this may be due to the amount of text output by wholeTextFiles. Do you know of any way to run this command silently? Or is there a better workaround?
Thanks!
How large are your files?
From the wholeTextFiles API:
Small files are preferred, large files are also allowable, but may cause bad performance.
In conf/log4j.properties, you can suppress excessive logging, like this:
# Set everything to be logged to the console
log4j.rootCategory=ERROR, console
That way, you'll get back only the result (the res value) in the REPL, just like in the plain Scala REPL.
Here are all the other logging levels you can play with: log4j API.
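If you'd rather not edit the properties file, Spark 1.4+ also lets you raise the threshold from the shell itself; a minimal sketch (the HDFS path is the one from the question):

// Suppress INFO chatter for the current SparkContext (Spark 1.4+).
sc.setLogLevel("ERROR")

val input = sc.wholeTextFiles("hdfs://path/*")
println(input.count)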

Can Ruby's stdlib Logger class safely handle writers from multiple processes?

I'm working on a Ruby library that needs to do logging. Ideally, I'd like multiple worker processes to be able to log into the same file. Looking at the source for the Logger class from Ruby's standard library, I see that efforts are made to synchronize writes into the log from multiple threads (as pointed out in the answer to Is Ruby's stdlib Logger class thread-safe?).
It seems there's a similar problem when multiple processes are writing into the same log file: depending on how the underlying layers decide to buffer / split the writes, each log message may not maintain its integrity.
So, is there a way of using the standard Logger class to allow multiple processes to safely log to a single file? If not, how is this typically accomplished in Ruby projects?
Here's what I mean by 'safely':
Each log line is 'atomic' - appearing in its entirety, uninterrupted before the next message begins. e.g. nothing like [1/1/2013 00:00:00] (PID N) LOGMESS[1/1/2013 00:00:01] (PID M) LOGMESSAGE2\nAGE1
Log messages need not be strictly ordered across processes, so long as the timestamps appearing in the log are correct.
Update:
I decided to take the Tin Man's advice and write a test, which you can find here:
https://gist.github.com/4370423
The short version: Winfield is correct; at least with the default usage of Logger, it is safe to use from multiple processes simultaneously (for the definition of 'safe' given above).
The key factor seems to be that, if given a file path (instead of an already-open IO object), Logger will open the file with mode WRONLY|APPEND and set sync=true on it. The combination of these two things (at least in my testing on Mac OS X) appears to make it safe to log concurrently from multiple processes. If you want to pass in an already-open IO object, just make sure you create it in the same way.
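To make that concrete, here is a minimal sketch of opening an IO object the same way Logger does internally; the path is illustrative, and the mode flags plus sync are the load-bearing parts:

require 'logger'

# Write-only, append, create if missing: the same flags Logger uses.
file = File.open('/var/log/myapp.log', File::WRONLY | File::APPEND | File::CREAT)
file.sync = true  # flush each write straight to the OS

logger = Logger.new(file)
logger.info("PID #{Process.pid} logging safely")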
Yes, you can safely write interleaved log data to a single logfile in the way you've described.
However, you're better off logging to a separate log for each process, or using a consolidated logging system like syslog. Here are a couple of reasons why:
Log Rotation/Management: truncating/rolling logfiles is difficult when you have to coordinate signalling multiple processes
Interleaved data can be confusing even if you're injecting the PID to disambiguate
I'm currently managing a number of Resque workers per system with a single logfile, and I wish I had separated the logfiles per worker. It's been difficult to debug issues and manage the logs properly.
