I'm coding a socket file server in C++, and I can't figure out how to put proper unit testing into my makefile. My problem is as follows:
The server, when started, prints its port number to stdout. It then listens, ad infinitum. The client process (my test suite) needs to read the server's output and then run as its own, parallel process.
How can I write a script which will 1. run the two programs in parallel, 2. pipe the output from one to the other properly, and 3. store the output in a nice format for later viewing?
It sounds as if normal piping should work:
run:
	myserver | tee mylog.txt | myclient
Then the file mylog.txt contains the output of myserver, i.e. the port number.
If you want to capture your client's output in a file as well, you can redirect it.
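For completeness, a minimal sketch of a make target that does both, assuming myserver prints the port number as its first line of output (clientlog.txt is just an example name):

run:
	myserver | tee mylog.txt | myclient > clientlog.txt 2>&1

tee duplicates the server's stdout, so myclient still receives the port number on its stdin while mylog.txt keeps a copy for later viewing.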
Basically, when I run the command gcloud app logs tail >> logs.txt I would expect it to store the streamed logs in the output file.
I am guessing that since the command executes asynchronously it ends up storing nothing and stops the process.
If so, is there a way to store the logs without the need of a script?
You're not seeing anything because you are redirecting stdout, while gcloud, just like most non-shell programs, prints to stderr. This seems to work fine:
gcloud app logs tail >> logs.txt 2>&1
For anything beyond a temporary need (getting logs for a couple of minutes), you should be using sinks for this, not some homebrew solution: https://cloud.google.com/logging/docs/export/configure_export_v2. Logs will be exported to GCS every hour.
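For example, a sink that exports App Engine logs to a Cloud Storage bucket can be created from the CLI. A sketch, where my-app-sink and my-app-logs-bucket are hypothetical names (the bucket must already exist) and the filter is an assumption to be adjusted to the logs you actually want:

gcloud logging sinks create my-app-sink storage.googleapis.com/my-app-logs-bucket --log-filter='resource.type="gae_app"'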
I have a task for my thesis which involves a camera and several LEDs that can be controlled by some bash commands. To access the system, I need to run ssh root@IP, which puts me in the system's default path. Under this path there is a script which opens the camera application when run as ./foo, and once it is executed I am inside the camera application. Then I can check the temperature of the LEDs etc. by typing e.g. status -t.
Now my aim is to automate this temperature check with a bash script or Python code. In bash, if I run e.g. ssh root@192.168.0.1, then ./foo and status -t consecutively, I can get the temperature value. However, executing ssh root@192.168.0.1 './foo' 'status -t' ends in an infinite loop. If I do ssh root@192.168.0.1 './foo', I expect to be in the camera application, but this opens the application in such a way that I can't execute status -t afterwards.
I also tried something like this:
ssh root@192.168.0.1 << EOF
ls
./foo
status -t
EOF
I also looked at related questions, e.g. running ssh commands from Python using subprocess and with paramiko, but nothing really works. What differentiates my situation from those examples is that my commands depend on each other: one opens another application, and the next command runs inside that application.
So the questions are:
1- Does what I am doing make sense, and is it even possible?
2- How can I do this in a bash script or Python code?
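One pattern that can work for this kind of dependent chain, sketched under the assumption that ./foo reads its interactive commands from stdin, is to pipe the in-application command into the ssh session so foo receives it once it starts:

printf 'status -t\n' | ssh root@192.168.0.1 './foo'

If foo instead insists on a real terminal, forcing a pseudo-tty with ssh -tt (or driving the session with a tool like expect) may be needed; which approach applies depends on how foo reads its input.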
I received application code that, per its app note, is to be run on Linux kernel 4.4.60 with the command below:
/usr/sbin/server_application | logger -t tag &
If I run the server_application with just "server_application &" (in the background), then the socket which the process attempts to create fails to initialize, and obviously the client_application (run separately, of course) times out.
As far as I know, the Linux logger utility only makes entries in the system log.
Q: What might the application need that requires it to log the tagged entries in the syslog?
I am trying to reverse engineer as to why it needs logger specifically.
Any thoughts on this would be greatly appreciated.
Thanks in advance.
If you run server_application in the background, the process might not have its standard output opened to anything at all, and any writes to stdout will fail. If you create a pipeline that sends standard output to another program, then server_application sees different characteristics for its stdout (a pipe instead of a terminal, or nothing).
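You can see this difference directly; a quick Linux-specific illustration (it relies on /proc):

ls -l /proc/self/fd/1          # run interactively: stdout points at your terminal
ls -l /proc/self/fd/1 | cat    # in a pipeline: stdout points at a pipe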
You could also try to figure out the difference by running these two with strace, for example:
strace -o /tmp/syscall1.log /usr/sbin/server_application &
strace -o /tmp/syscall2.log /usr/sbin/server_application | logger -t tag &
and then reading /tmp/syscall1.log, looking for failed system calls near the end of the run, and comparing them with the calls in /tmp/syscall2.log.
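Failed calls appear in strace output with a -1 return value, so a rough way to spot them in the first log:

grep -- '= -1' /tmp/syscall1.log | tail    # last failing system calls of the standalone run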
I'm looking for some help with a script of mine. I'm new to bash scripting, and I'm trying to start a service on a remote host with ssh and then capture all the output of this service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request it receives, while the second command sends a request to that server.
Normally, if I were to execute the two commands in two separate shells, I would get the first command's response on the terminal, but now I need it in the file.
I would like the server to write all the requests to the log file, keeping a sort of open ssh connection to receive any new output from the server process.
I hope I made myself clear.
thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests).
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command on a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, sending the output of the first command (which is empty until the second command runs) to a file, so that ANY output triggered by later requests is written to that file.
EDIT2: As of now, with the commands above, the server answers the requests but its output is not recorded in the log file.
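A common culprit in this situation is buffering: when the server's stdout is a pipe instead of a terminal, its output may be block-buffered and not flushed to the file for a long time. A sketch of one workaround, assuming the remote host has coreutils' stdbuf and the server uses stdio buffering:

ssh $remotehost "stdbuf -oL ./server $port" > logFile 2>&1 &
ssh $remotehost "nc -q 2 localhost $port < $payload"

If the server only flushes when attached to a terminal, forcing a pseudo-tty with ssh -tt is another option, at the cost of terminal control sequences ending up in the log.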
I'm running a Bukkit (Minecraft) server on a Linux machine and I want to have the server gracefully shut down using the server's stop command and the computer suspend at a certain time using pm-suspend from the command line. Here's what I've got:
me@comp~/dir$ perl -e 'sleep [time]; print "stop\n";' | ./server && sudo pm-suspend
(I've edited my /etc/sudoers so I don't have to enter my password when I suspend.)
The thing is, while the perl -e is sleeping, the server seems to expect a constant stream of bytes (that's my guess; I could be misunderstanding something), so it prints out all of the nothing it receives, taking up precious resources:
me@comp~/dir$ ...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>...
Is there any such thing as a buffered pipe? If not, are there any ways to send delayed input to a script?
You may want to have a look at Bukkit's wiki, which recommends an init script for permanently running servers.
This init script uses a rather unconventional approach to communicate with the running server: the server is started in a screen session, and all commands are then sent to the server console via screen, e.g.
screen -p 0 -S $SCREEN -X eval 'stuff \"stop\"\015'
See https://github.com/Ahtenus/minecraft-init/blob/master/minecraft
This approach suggests that Bukkit may expect standard input to be attached to a terminal, thus requiring the screen wrapper (which is itself a terminal emulator) for unattended runs.
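Adapted to the question, a rough sketch (the session name mc and the sleep duration are placeholders):

screen -dmS mc ./server                        # start the server detached inside screen
sleep 3600                                     # wait until shutdown time
screen -p 0 -S mc -X eval 'stuff "stop"\015'   # type "stop" plus Enter into the console
sudo pm-suspend

The \015 is a carriage return; screen's stuff command injects the text as if it had been typed on the server console.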