How to monitor JournalD for Error/Fatal error messages of some particular process and start some callback if any - systemd-journald

How can I monitor journald for Error/Fatal messages from a particular process and trigger a callback when one appears? Say some service, e.g. Nginx, posts a log entry at Error level; a script should then be called. How do I write such a script?

You have to use the journalctl --unit darts-game-server.service --follow -o json stream and parse its output in your code. There are libraries for that as well.
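A minimal shell sketch of that approach, assuming jq is available, using nginx.service from the question and a hypothetical /usr/local/bin/on-error.sh callback:
journalctl --unit nginx.service --follow --output json --priority err |
while IFS= read -r entry; do
    # Extract the log text from the JSON record and hand it to the callback
    msg=$(printf '%s' "$entry" | jq -r '.MESSAGE')
    /usr/local/bin/on-error.sh "$msg"
done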

Related

Is it possible to send Google Cloud's log stream to a local file from the command line

Basically, when I run the command gcloud app logs tail >> logs.txt I would expect it to store the streamed logs in the output file.
I am guessing that, since the command executes asynchronously, it ends up storing nothing and stopping the process.
If so, is there a way to store the logs without needing a script?
You're not seeing anything because you are redirecting stdout, while gcloud, just like most non-shell programs, prints to stderr. This seems to work fine:
gcloud app logs tail >> logs.txt 2>&1
For anything other than a temporary need (grabbing logs for a couple of minutes), you should use export sinks for this rather than a homebrew solution: https://cloud.google.com/logging/docs/export/configure_export_v2. Logs will be stored to GCS every hour.
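A hypothetical sink exporting App Engine logs to a Cloud Storage bucket could be created along these lines (the sink and bucket names are placeholders):
gcloud logging sinks create my-app-logs-sink storage.googleapis.com/my-log-bucket --log-filter='resource.type="gae_app"'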

Bash: Mailing output from continuously running command

I got a broker running. It is possible to subscribe to a channel of that broker in order to receive messages that have been published to a channel on this broker.
My objective is to send each received message as email.
My simplified command to subscribe to my broker looks like this:
brooker subscribe --host myhost -i user -c mychannel
When called from the command line, I receive messages as console output when they are published (the command keeps running!).
The difficulty lies in capturing the output of my command and feeding it to a mail command.
So far I tried to put the following simplified line into my bash script:
brooker subscribe --host myhost -i user -c mychannel | mail -s "alert" me#my.net
The mailing part has been tested and is functional.
But I do not receive any emails at all when I run my script and publish messages from a second console to a channel of the broker.
Question: How can I feed the continuously incoming messages to my mail command?
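One sketch of a fix, assuming one email per received message is acceptable: mail only sends once it sees EOF on stdin, so instead of piping the never-ending stream directly into it, read the stream line by line and invoke mail for each message (the address is the question's placeholder with @ restored):
brooker subscribe --host myhost -i user -c mychannel |
while IFS= read -r msg; do
    # mail reads stdin until EOF, so send one mail per published message
    printf '%s\n' "$msg" | mail -s "alert" me@my.net
done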

Running a server application process along with "logger"

I received application code that is to be run on Linux kernel 4.4.60 with the command below, per the vendor's app note:
/usr/sbin/server_application | logger -t tag &
If I run the server_application with just "server_application &" (in the background), then the socket the process attempts to create fails to initialize, and the client_application (run separately, of course) times out.
As far as I know, the Linux logger utility only makes entries in the system log.
Q. -- What might the application need that requires it to log the tagged entries to syslog?
I am trying to reverse engineer why it needs logger specifically.
Any thoughts on this would be greatly appreciated.
Thanks in advance.
If you run server_application in the background, the process might not have its standard output opened to anything at all, and any writes to stdout will fail. If you build a pipeline that pipes its standard output to another program, server_application sees a different kind of stdout.
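A quick, hypothetical way to test that theory from a shell is to compare a run with stdout deliberately closed against a run with stdout attached to a pipe:
# stdout closed: should reproduce the failure if a missing stdout is the cause
/usr/sbin/server_application >&- &
# stdout attached to a pipe (as with logger) but discarded: should behave like the working setup
/usr/sbin/server_application | cat > /dev/null &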
You could also try to figure out the difference by running the two variants under strace, for example:
strace -o /tmp/syscall-plain.log /usr/sbin/server_application &
strace -o /tmp/syscall-logger.log /usr/sbin/server_application | logger -t tag &
and then reading /tmp/syscall-plain.log, looking for failed system calls near the end of the run, and comparing them with the corresponding calls in /tmp/syscall-logger.log.
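For instance, failed calls stand out in an strace log as returns of -1 with an errno, so something like this (using the hypothetical file names above) surfaces them quickly:
grep '= -1 E' /tmp/syscall-plain.log | tail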

When does the Kubernetes client-go Remotecommand Stream finish?

I am using the Remotecommand from https://github.com/kubernetes/client-go/blob/master/tools/remotecommand/remotecommand.go#L108 to execute a command on a pod and stream the result to an io.Writer. As stated in the comment above the function in the link, Stream finishes only when the client or server disconnects. Since the Stream config has only one command attached, why doesn't it close when the command has exited? How can I know when the command has finished?
In particular, I am transferring the result of tar -cf - ... to the client and want to know when it's finished.
I noticed that the Stream function runs synchronously and blocks until the remote command has finished. Adding a one-second delay after calling writer.Close() kept the program from exiting before I could handle the received tar archive.

Kafka in supervisor mode

I'm trying to run Kafka under supervision so that it can start automatically in case of a shutdown. But all the examples of running Kafka use shell scripts, and supervisord is not able to tell which PID to monitor. Can anyone suggest how to accomplish auto restart of Kafka?
If you are on a Unix or Linux machine, then this is where /etc/inittab comes in handy. Or you might want to use daemontools. I don't know about Windows though.
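If you go the /etc/inittab route (classic SysV init only), a respawn entry might look like this; the script path and -f (foreground) flag are illustrative, borrowed from the Supervisord answer below:
ka:2345:respawn:/usr/lib/kafka/bin/kafka-server.sh -f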
We are running Kafka under Supervisord (http://supervisord.org/), and it works like a charm. The run command looks like this (as specified in the supervisord.conf file):
command=/usr/local/bin/pidproxy /var/run/kafka.pid /usr/lib/kafka/bin/kafka-server.sh -f -p /var/run/kafka.pid
The -f flag tells Kafka to start in the foreground. If the -p flag is set, the Kafka process PID is written to the specified file.
The pidproxy command is part of the Supervisord distribution. Upon receiving a KILL signal, it reads the PID from the specified file and forwards the signal to the corresponding process.
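A fuller supervisord.conf entry built around that command might look like this; the autorestart policy and log paths are illustrative:
[program:kafka]
command=/usr/local/bin/pidproxy /var/run/kafka.pid /usr/lib/kafka/bin/kafka-server.sh -f -p /var/run/kafka.pid
autostart=true
autorestart=true
stdout_logfile=/var/log/kafka.out.log
stderr_logfile=/var/log/kafka.err.log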
