Running a server application process along with "logger" - linux-kernel

I received application code that runs on Linux kernel 4.4.60; per its app note, it is started with the following command:
/usr/sbin/server_application | logger -t tag &
If I run the server_application with just "server_application &" (in the background), the socket the process attempts to create fails to initialize, and obviously the client_application (run separately, of course) times out.
As far as I know, the Linux logger utility only writes entries to the system log.
Q: What might the application need that requires its output to be piped to logger and tagged in syslog?
I am trying to reverse engineer why it needs logger specifically.
Any thoughts on this would be greatly appreciated.
Thanks in advance.

If you run the server_application in the background on its own, the process might not have its standard output opened to anything at all, and any writes to stdout will fail. If you instead create a pipeline that sends standard output to another program, server_application gets a pipe on its stdout, which has different characteristics.
You could also try to figure out the difference by running the two variants under strace (use distinct log files so the second run does not overwrite the first), for example:
strace -o /tmp/syscall-plain.log /usr/sbin/server_application &
strace -o /tmp/syscall-piped.log /usr/sbin/server_application | logger -t tag &
and by reading the logs, looking for failed system calls near the end of the run in the former and comparing them with the calls in the latter.
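A quick way to see the difference directly (assuming your system mounts /proc) is to inspect what file descriptor 1 of the server points to in each case:
/usr/sbin/server_application &
ls -l /proc/$!/fd/1          # plain background run: typically a tty, or absent if stdout was closed
/usr/sbin/server_application | logger -t tag &
ls -l /proc/$(pgrep -f server_application)/fd/1   # piped run: shows a pipe:[...] entry
(In the piped case $! is logger's PID, hence the pgrep lookup; you may need a more specific pattern on your system.)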

Related

Is it possible to send google clouds log stream to a local file from command line

Basically, when I run the command gcloud app logs tail >> logs.txt I would expect it to store the streamed logs in the output file.
I am guessing that since the command executes asynchronously it ends up storing nothing and stops the process.
If so, is there a way to store the logs without the need of a script?
You're not seeing anything because you are redirecting stdout, while gcloud, like many command-line tools, prints this output to stderr. This seems to work fine:
gcloud app logs tail >> logs.txt 2>&1
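The same effect is easy to reproduce with any command that writes to stderr:
ls /nonexistent >> logs.txt        # the error still appears on the terminal
ls /nonexistent >> logs.txt 2>&1   # the error is captured in logs.txt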
For anything other than a temporary need (grabbing logs for a couple of minutes), you should be using sinks for this rather than a homebrew solution: https://cloud.google.com/logging/docs/export/configure_export_v2. Logs will be stored to GCS every hour.

How to run shell script on VM indefinitely?

I have a VM that I want running indefinitely. The server is always running but I want the script to keep running after I log out. How would I go about doing so? Creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
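If you want to confirm the process survived your logout, log back in and look it up by name (pgrep here is from procps and available on most Linux systems):
pgrep -af command-name   # lists matching PIDs with their full command lines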
See also the answer in Petur's comment, which discusses this issue a fair bit.

Run a server and client in a makefile

I'm coding a socket file server in C++, and I can't figure out how to put proper unit testing into my makefile. My problem is as follows:
The server, when started, spits out its port number to stdout. It then listens, ad infinitum. The client process (my test suite) needs to read the server's output and then run as its own, parallel process.
How can I write a script that will (1) run the two programs in parallel, (2) pass output from one to the other properly, and (3) store the output in a readable format for later viewing?
It sounds as if normal piping should work:
run:
	myserver | tee mylog.txt | myclient
Then the file mylog.txt contains the output of myserver, i.e. the port number.
If you want to catch your client's output in a file, you can redirect it.
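A slightly fuller sketch along the same lines (the test target and file names here are illustrative): keep the server's output for later viewing while still feeding it to the client, and capture the client's output as well:
test: myserver myclient
	./myserver | tee server.log | ./myclient > client.log 2>&1
(Remember that make recipe lines must be indented with a tab.)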

How can the strace output of an ever-running binary started from rcS be logged?

I am trying to do some profiling on my embedded Linux box using strace.
The application I want to trace is the box's main software, and it keeps running forever.
How can I run it under strace and log the output to a file?
In my rcS script I start the application like this:
./my_app
Now, with strace:
strace ./my_app -> I want to log this output to a file, and I need to be able to read the file without killing the application. Remember, this application never terminates.
Please help!
Instead of a command to launch, use the -p option to strace to specify the process ID of an already-running process you wish to attach to.
Chris is right: strace takes the -p option, which lets you attach to a running process just by specifying its PID.
Let's say your 'my_app' process runs with PID 2301 (you can find the PID by logging into your device and using 'ps'). Try 'strace -p 2301', and you will see all system calls for that PID. Note that strace prints its trace to stderr, so to capture it in a file redirect stderr ('strace -p 2301 2> /tmp/my_app-strace') or use the -o option.
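Putting those pieces together, here is a sketch you could adapt (PID 2301 is just the example from above; -f also follows any child processes, and -o writes the trace to a file you can read while my_app keeps running):
strace -f -o /tmp/my_app-strace -p 2301 &
tail -f /tmp/my_app-strace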
Hope this helps.

Can a standalone ruby script (windows and mac) reload and restart itself?

I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried this on, the new version of the file is executed each time, which I have verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I get a different pid with each execution, which I would expect given that I see a new process in the task manager for each run. The Mac is behaving correctly: the pid stays the same across every exec, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short, is there a way to get a windows ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a rails application, though it does use activerecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end) I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen, and just exited, the spawn would frequently (>50% of the time) fail. This is not cross-platform obviously, so in order to have the same behavior on the mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process running restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
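On the Unix side, a minimal sketch of that idea as a shell script (the restart.sh name and argument handling are illustrative):
#!/bin/sh
# usage: restart.sh <pid> <script> [args...]
pid=$1; shift
# Wait until the original process has exited...
while kill -0 "$pid" 2>/dev/null; do sleep 1; done
# ...then start the script again.
exec ruby "$@"
On Windows the same role would be played by the restart.exe described above.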
