Normally, in Docker/Kubernetes, it's recommended to output logs directly to stdout.
Then we can use kubectl logs or docker logs to see the logs.
For example: time=123 action=write msg=hello world. On a TTY, this might be colorized for human friendliness.
However, if we want to export the logs to a log processing center, like EFK (Elasticsearch-Fluentd-Kibana), we need JSON-formatted logs.
like: {"time"=123,"action"="write","msg"="hello world"}
What I want:
Is there a logging approach that can provide both human friendliness and JSON format?
I'm looking for a way to get human-readable logs from docker logs while the log collector still receives the logs in JSON format.
Conclusion
Thanks for the answers below. I ended up with two methods:
Use a different log format in each environment (see the Go sketch after this list):
1.1 Use text format in development: docker logs will print colorized, human-readable logs.
1.2 Use JSON format in production: EFK can process JSON well.
Convert the format in the log collector:
2.1 We keep the text format, but in a log collector like Fluentd we can define rules that translate the text-format key-value pairs into JSON key-value pairs.
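A minimal sketch of method 1 using Go's standard log/slog package (Go 1.21+; the LOG_FORMAT environment variable name is an assumption):

package main

import (
	"log/slog"
	"os"
)

func main() {
	var handler slog.Handler
	// LOG_FORMAT is a hypothetical environment variable used to pick the format.
	if os.Getenv("LOG_FORMAT") == "json" {
		// Production: JSON output that EFK can ingest directly.
		handler = slog.NewJSONHandler(os.Stdout, nil)
	} else {
		// Development: human-readable key=value output for `docker logs`.
		handler = slog.NewTextHandler(os.Stdout, nil)
	}
	slog.SetDefault(slog.New(handler))

	slog.Info("hello world", "action", "write")
	// Text handler: time=... level=INFO msg="hello world" action=write
	// JSON handler: {"time":"...","level":"INFO","msg":"hello world","action":"write"}
}

The same binary then serves both cases; only the environment differs between development and production.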
Kubernetes has such an option of structured logging for its system components.
The klog library supports the --logging-format=json flag, which switches the log output to JSON format; more information about it here and here.
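For example, a control-plane component can be started with the flag like this (a sketch; the flag was introduced as alpha in Kubernetes v1.19, so availability depends on your version):

kube-apiserver --logging-format=json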
Yes, you can do that with Fluentd. Below are the basic steps you need to take to finalize this setup (a config sketch follows the list):
Configure the Docker container to log to stdout (you can use any format you like).
Configure Fluentd to tail the Docker log files from /var/lib/docker/containers/*/*-json.log.
Parse the logs with Fluentd and convert them to JSON.
Output the logs to Elasticsearch.
This article shows exactly how to do this setup, and this one explains how to parse key-value logs.
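A minimal Fluentd configuration sketch for these steps (paths, tags, and the Elasticsearch host are assumptions; the key=value parsing reuses the built-in ltsv parser with a space delimiter, and the Elasticsearch output requires fluent-plugin-elasticsearch):

<source>
  @type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-docker.pos
  tag docker.*
  <parse>
    @type json   # the json-file driver wraps each line as {"log": "...", "stream": "...", "time": "..."}
  </parse>
</source>

# Parse the key=value pairs inside the "log" field into structured fields
<filter docker.**>
  @type parser
  key_name log
  <parse>
    @type ltsv
    delimiter " "        # pairs separated by spaces instead of tabs
    label_delimiter "="  # key=value instead of key:value
  </parse>
</filter>

<match docker.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>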
Related
The log format consists of JSON encoded line by line.
Each line is:
{data,payload:/local/path/to/file}
{data,payload:/another/file}
{data,payload:/a/different/file}
The initial idea is to configure Logstash to use an HTTP input and to write a Java (or any other language) daemon that gets the file, parses it line by line, replaces the payload with the content of the file, and sends the data to Logstash.
I can't modify how the server works, so the log format can't be changed.
The Logstash machine is a different host, so there is no direct access to the files.
Logstash can't mount a shared folder from the server_host.
I can't open any port apart from a single port for Logstash, due to compliance rules for the solution that aren't under my control.
Now, to save some time and have something more reliable than a custom-made solution, is it possible to configure Filebeat to process every line of JSON before sending it to Logstash, turning each line into:
{data,payload:content_of_the_file}
Filebeat won't be able to do advanced transformations of this kind; it is only meant to forward logs, and it can't even do the basic string processing that Logstash does. I suggest you write a custom script that does this transformation and writes the output to a different file (a sketch follows below).
You can then use Filebeat to send the contents of this new file to Logstash.
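A minimal sketch of such a script in Go (file names are assumptions; each input line is assumed to be a JSON object whose "payload" field holds a local file path):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	in, err := os.Open("server.log") // hypothetical input log
	if err != nil {
		panic(err)
	}
	defer in.Close()

	out, err := os.Create("expanded.log") // Filebeat tails this file instead
	if err != nil {
		panic(err)
	}
	defer out.Close()

	scanner := bufio.NewScanner(in)
	for scanner.Scan() {
		var record map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &record); err != nil {
			continue // skip malformed lines
		}
		// Replace the path in "payload" with the referenced file's contents.
		if path, ok := record["payload"].(string); ok {
			if content, err := os.ReadFile(path); err == nil {
				record["payload"] = string(content)
			}
		}
		line, _ := json.Marshal(record)
		fmt.Fprintln(out, string(line))
	}
}

The sketch assumes the input file is complete when the script runs; tailing the original log and rewriting incrementally is left out.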
I deployed a Go application on Google Cloud using Kubernetes, which automatically logs to Google Stackdriver. Oddly, all log statements are being tagged with severity "ERROR".
For example:
log.Println("This should have log level info")
will be tagged as an error.
Their docs say "Severities: By default, logs written to the standard output are on the INFO level and logs written to the standard error are on the ERROR level."
Anyone know what could be wrong with my setup?
Take a look at this logging package: github.com/teltech/logger, with an accompanying blog post. It will output your logs in a JSON format, including the severity, which the Stackdriver Fluentd agent can read.
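Note that Go's standard log package writes to stderr by default, which is why log.Println output shows up as ERROR. As an alternative to pulling in a package, here is a minimal sketch (not the teltech/logger API) that emits one JSON object per line with a severity field, which the Stackdriver agent maps onto log levels:

package main

import (
	"encoding/json"
	"os"
	"time"
)

// entry is a hypothetical minimal structured record; "severity" is the
// field name the Stackdriver agent recognizes for the log level.
type entry struct {
	Severity string `json:"severity"`
	Message  string `json:"message"`
	Time     string `json:"time"`
}

func logInfo(msg string) {
	// Encode writes the JSON object followed by a newline to stdout.
	json.NewEncoder(os.Stdout).Encode(entry{
		Severity: "INFO",
		Message:  msg,
		Time:     time.Now().Format(time.RFC3339),
	})
}

func main() {
	logInfo("This should have log level info")
}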
I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
Now I have a log file in the web server environment which logs errors, and I want the application server to write its logs to the same file as well.
Please advise.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, the file will get corrupted unless you use file locking. However, locking decreases throughput significantly, and it isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated (see the sketch below).
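For example, each JVM could ship its events over a socket to one central collector instead of sharing a file. A minimal log4j2 appender sketch (host and port are assumptions; JsonLayout requires the Jackson libraries on the classpath):

<Configuration>
  <Appenders>
    <!-- Send JSON-formatted events to a central collector such as Logstash -->
    <Socket name="Central" host="logcollector.example.com" port="5000">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Central"/>
    </Root>
  </Loggers>
</Configuration>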
Currently I have a Go web application containing over 50 .go files. Each file writes logs to stdout for now.
I want to use Fluentd to capture these logs and then send them to Elasticsearch/Kibana.
I searched the internet for a solution and found the package https://github.com/fluent/fluent-logger-golang .
To use it I would need to change my logging-related code in every Go file, and there would be many data structures that I would need to post to Fluentd.
In short, I don't want to use this approach.
Please let me know if there are any other ways to do this.
Thank you
Ideally (at least in my opinion), you would essentially just pipe stdout to Fluentd.
If you happen to also be using Docker for your application, you can do this easily using the built-in logging drivers:
https://docs.docker.com/engine/admin/logging/overview/
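For example, with Docker's fluentd logging driver the container's stdout is shipped without any code changes (the address and image name are assumptions):

docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=myapp \
    myapp:latest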
Otherwise, there seem to be a few options to help get stdout to Fluentd:
12Factor App: Capturing stdout/stderr logs with Fluentd
I am trying to configure log shipping/consolidation using Logstash. My Tomcat servers run on Windows. I am running into a few problems with my configuration: Tomcat on Windows, logging using log4j; the Redis consolidator, Elasticsearch, Logstash, and Kibana run on a single Linux server.
Fewer log shippers are available on Windows. It looks like nxlog does not work with Redis out of the box, so I have reverted to using Logstash to ship. I would like to learn what others prefer to use.
Rather than using custom appenders, I would rather have Tomcat use log4j to log to a file and then feed the file as input to be shipped to Redis. I don't want to change the log formats.
No json-event format for me (http://spredzy.wordpress.com/2013/03/02/monitor-your-cluster-of-tomcat-applications-with-logstash-and-kibana/). I can't seem to get the right file config in shipper.conf.
Any sample config for log4j files fed to Logstash via Redis would help.
Thanks
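A minimal shipper.conf along the requested lines might look like this (the paths, host, and key are assumptions; the Logstash file input wants forward slashes even on Windows):

# Tail the log4j files and push raw lines to Redis for the central indexer
input {
  file {
    path => "C:/tomcat/logs/*.log"
    start_position => "beginning"
  }
}
output {
  redis {
    host => "redis-host"
    data_type => "list"
    key => "logstash"
  }
}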
I'm currently writing a Java library to send logs to Logstash using ZeroMQ (no central redis broker required).
Disclaimer: it's not quite perfect yet, but may be worth keeping an eye on.
https://github.com/stuart-warren/logit
You can set up the standard JULI log configuration (or log4j if you are using that), and with the tomcat-valve jar you can send access logs as well by configuring server.xml.
It does, however, send logs in json-event format by default.
I'm confused as to why you wouldn't want to save all that processing on the Logstash server. You can (and currently probably should) log to a file in standard format as well.
logging.properties file:
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers= com.stuartwarren.logit.jul.ZmqAppender
# handlers= com.stuartwarren.logit.jul.ZmqAppender, java.util.logging.ConsoleHandler
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level=INFO
# Limit the messages that are printed on the console to INFO and above.
com.stuartwarren.logit.jul.ZmqAppender.level=INFO
com.stuartwarren.logit.jul.ZmqAppender.socketType=PUSHPULL
com.stuartwarren.logit.jul.ZmqAppender.endpoints=tcp://localhost:2120
com.stuartwarren.logit.jul.ZmqAppender.bindConnect=CONNECT
com.stuartwarren.logit.jul.ZmqAppender.linger=1000
com.stuartwarren.logit.jul.ZmqAppender.sendHWM=1000
com.stuartwarren.logit.jul.ZmqAppender.layout=com.stuartwarren.logit.jul.Layout
com.stuartwarren.logit.jul.Layout.layoutType=logstashv1
com.stuartwarren.logit.jul.Layout.detailThreshold=WARNING
com.stuartwarren.logit.jul.Layout.tags=tag1,tag2,tag3
com.stuartwarren.logit.jul.Layout.fields=field1:value1,field2:value2,field3:value3
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
server.xml:
<Valve className="com.stuartwarren.logit.tomcatvalve.ZmqAppender"
layout="com.stuartwarren.logit.tomcatvalve.Layout"
socketType="PUSHPULL"
endpoints="tcp://localhost:2120"
bindConnect="CONNECT"
linger="1000"
sendHWM="1000"
layoutType="logstashv1"
iHeaders="Referer,User-Agent"
oHeaders=""
cookies=""
tags="tag1,tag2,tag3"
fields="field1:value1,field2:value2,field3:value3" />