Logs in journald and syslog - systemd

I redirect journald's logs to syslog with the ForwardToSyslog option.
However, I would like to keep my logs in journald and also have them in syslog.
Is that possible?

ForwardToSyslog= is documented in man journald.conf. It doesn't redirect logs to syslog; it forwards a copy of each message there, so the journal keeps its own copy.
Do you have any indication that the logs sent to syslog are missing from the systemd journal?
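For reference, a minimal /etc/systemd/journald.conf that does both would look roughly like this (Storage=persistent is an assumption here; the default auto also keeps the journal on disk when /var/log/journal exists):
[Journal]
# keep the journal on disk
Storage=persistent
# and forward a copy of every message to syslog
ForwardToSyslog=yes
Restart journald afterwards with systemctl restart systemd-journald.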

Related

Why does logstash produce logs in /var/log with the messages-20220731 naming convention?

As far as I know, the logstash log path is /var/log/logstash, but logstash produces tons of logs with the messages-<date> naming convention. How can I stop it producing these logs?

How to stop logstash from writing its logs to syslog?

I have my logstash configuration on my Ubuntu server; it reads data from a Postgres database and sends the data to Elasticsearch. I have configured a schedule so that every 15 minutes logstash checks the Postgres table, and if there is any change in the table it sends the data to Elasticsearch.
But each time, logstash is also sending its logs to syslog, which I do not need. Because of logstash, my syslog file takes up a lot of disk space.
So how do I stop logstash from sending its logs to syslog? Is there any configuration in logstash.yml to avoid sending logs to syslog?
Many sites online say to remove the line below from the configuration:
stdout { codec => rubydebug }
But I don't have this line.
In my output I just send my data to Elasticsearch, which is hosted on AWS.
Is there a way to stop logstash from sending its logs to syslog?
Disable the rootLogger.appenderRef.console in log4j.
The log files that logstash itself produces are created through log4j; by default one stream goes to the console. Syslog picks up those console logs and writes them to the syslog file. In the Ubuntu version of logstash this is configured in the file /etc/logstash/log4j2.properties.
In the default configuration there is a line that starts with
rootLogger.appenderRef.console
If you add a # in front of that line and restart logstash, the log files that logstash creates will stop going to syslog:
service logstash restart
The other rootLogger, which uses the RollingFileAppender, will still write log messages from logstash itself (so not the messages being processed by your pipeline) to
/var/log/logstash/logstash-plain.log
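For illustration, the relevant lines of /etc/logstash/log4j2.properties would then look roughly like this (the exact appender names vary between Logstash versions, so treat this as a sketch):
# console appender commented out, so logstash's own logs no longer reach syslog
# rootLogger.appenderRef.console.ref = plain_console
# the rolling file appender keeps writing to /var/log/logstash/logstash-plain.log
rootLogger.appenderRef.rolling.ref = plain_rolling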
It's easy to confuse the log files that logstash creates with the messages that you process, especially if they get mixed by the logstash-output-stdout or logstash-output-syslog plugins. This is not applicable to you, because you use the logstash-output-elasticsearch plugin, which writes to Elasticsearch.
On Ubuntu, the log4j2.properties file gets skipped if you run logstash from the command line. This is a nice way of testing your pipeline in a terminal, and you can run multiple logstash instances in parallel (e.g. the service and a command-line test pipeline):
/usr/share/logstash/bin/logstash -f your_pipeline.conf
To avoid writing to syslog, check your pipeline and log4j2.properties files.
In your pipeline files, remove all occurrences of this:
stdout { codec => rubydebug }
And in your log4j2.properties files, comment out this line:
rootLogger.appenderRef.console
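Put together, an output section that only ships events to Elasticsearch (the endpoint and index below are placeholders, not your actual values) would look like:
output {
  elasticsearch {
    hosts => ["https://my-es-endpoint:9200"]  # placeholder endpoint
    index => "postgres-data"                  # placeholder index name
  }
  # stdout { codec => rubydebug }  # removed: this echoed every event to the console
}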

How are logs printed directly onto the console in yarn-cluster mode using Spark?

I am new to Spark and I want to print logs on the console using Apache Spark in yarn-cluster mode.
You need to check the value in the log4j.properties file. In my case I have this file in the /etc/spark/conf.dist directory:
log4j.rootCategory=INFO,console
INFO prints all the logs on the console. You can change the value to ERROR or WARN to limit the information you see on the console, as Spark's logs can be overwhelming.
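For example, a log4j.properties that only shows warnings and errors on the console could look like this (a sketch based on Spark's default template):
# limit Spark's console output to warnings and errors
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n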

Zookeeper & Kafka error KeeperErrorCode=NodeExists

I have written a Kafka consumer and producer that worked fine until today.
This morning, when I started ZooKeeper and Kafka, my consumer was not able to read messages, and I found this in the ZooKeeper logs:
INFO Got user-level KeeperException when processing sessionid:0x151c41e62e10000
type:create cxid:0x2a zxid:0x1e txntype:-1 reqpath:n/a
Error Path:/brokers/ids
Error:KeeperErrorCode = NodeExists for /brokers/ids
(org.apache.zookeeper.server.PrepRequestProcessor)
Look for log.dirs in your server.properties file, delete all the Kafka and ZooKeeper logs from there, and try restarting ZooKeeper and Kafka respectively. I was facing the same issue and doing this resolved it.
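For reference, in the configs bundled with Kafka the relevant defaults are (your paths may differ):
# config/server.properties
log.dirs=/tmp/kafka-logs
# config/zookeeper.properties
dataDir=/tmp/zookeeper
Note that deleting these directories wipes all topic data, so only do this on a broker you can afford to reset.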
According to Confluent at https://groups.google.com/forum/#!topic/confluent-platform/h0gEik_Ii1E on 2016/10/08:
Those are not errors, you can see the log level is INFO. It is simply
logging that Kafka tried to create a node that already exists. Totally
normal behavior for Kafka and nothing to worry about.
Is there an actual problem related to the message or is everything working correctly?
Go to the Kafka root directory and look for the logs directory, then clear all logs. For instance, say your Kafka is installed in the Downloads folder:
cd ~/Downloads/kafka_2.13-2.6.0
rm -rf logs
It will resolve the issue.

View Log from Map/Reduce Task

I know that I can find the map/reduce task logs inside /usr/local/hadoop/logs/userlogs/.
Is there a friendlier way to see them?
For example, when I open http://127.0.0.1:8088/cluster/, I can see all the jobs executed in the cluster. Then I click on a FINISHED job. But when I try to click on Tracking URL: History, it gives me an error. Why can't I see the task logs from there?
I would like to see the stderr, stdout and syslog from each task.
Try using Job Browser from HUE
or use the command:
yarn logs -applicationId [OPTIONS]
general options are:
-appOwner AppOwner        (assumed to be current user if not specified)
-containerId ContainerId  (must be specified if node address is specified)
-nodeAddress NodeAddress  (in the format nodename:port; must be specified if container id is specified)
Example: yarn logs -applicationId application_1414530900704_0007
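Note that yarn logs can only fetch logs for finished applications when log aggregation is enabled, and a History link that errors out often means aggregation is off or the JobHistory server isn't running. Assuming a standard Hadoop setup, the switch lives in yarn-site.xml:
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>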