Bash: writing output to a file when using timeout results in an error - bash

I'm using this command to monitor iBeacon Bluetooth devices, and it works as expected:
sudo beacon scan -c
However, I recently changed it to run for just a few seconds and write the result to a file, like so:
sudo timeout 5 beacon scan -c > result.txt
The problem is that this writes nothing to the file, so there is probably an error in the command. Redirecting the error stream to the file as well shows an error:
sudo timeout 5 beacon scan -c &> result.txt
Contents of result.txt:
Set scan parameters failed: Input/output error
It feels like bash is trying to pass &> result.txt as parameters to the beacon scan command. I'm not very good at bash, so there is probably a simple solution to this problem, but I haven't found one!

Some programs designed to be interrupted with Ctrl-C don't behave the same when terminated with SIGTERM, which is what timeout sends by default. Try the -s INT option to have timeout send SIGINT instead.
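For example, keeping the rest of your command exactly as you have it, something like this should capture the scan output (a sketch; only the signal option is new):
# SIGINT mimics Ctrl-C, which the beacon tool appears to handle cleanly
sudo timeout -s INT 5 beacon scan -c &> result.txt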

Related

bash/python consecutive commands in a nested environment

I have a task for my thesis which involves a camera and several LEDs that can be controlled by bash commands. To access the system, I need to run ssh root@IP, which puts me in the system's default path. Under this path there is a script which opens the camera application when run as ./foo, and once it is executed, I am inside the camera application. There, I can check the temperature of the LEDs etc. by typing e.g. status -t
Now my aim is to automate this process and check the temperature from a bash script or Python code. In bash, if I run ssh root@192.168.0.1, ./foo, and status -t consecutively by hand, I get the temperature value. However, executing ssh root@192.168.0.1 './foo' 'status -t' ends in an infinite loop. If I do ssh root@192.168.0.1 './foo', I expect to be in the camera application, but it opens the application in such a strange way that I can't execute status -t afterwards.
I also tried something like this:
ssh root#192.168.0.1 << EOF
ls
./foo
status -t
EOF
(following examples I found), as well as Python approaches using subprocess and paramiko, but nothing really works. What actually distinguishes my situation from those examples is that my commands depend on each other: one opens another application, and the next command has to run inside that application.
So the questions are:
1- Does what I am doing make sense, and is it even possible?
2- How can I do it in a bash script or Python code?
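One variation worth sketching here (an assumption on my part: it only helps if ./foo reads its commands from standard input once it is given a terminal) is to force pseudo-terminal allocation with ssh -tt and feed the whole session through the heredoc:
ssh -tt root@192.168.0.1 << 'EOF'
./foo
status -t
exit
EOF
# the trailing "exit" is meant to end the remote shell once ./foo returns
With -tt the remote side behaves more like the interactive session described above, which may be the difference between "opens the application weirdly" and a usable session.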

Why does "read -t" block in scripts launched from xcodebuild?

I have a script that creates a FIFO and launches a program that writes output to the FIFO. I then read and parse the output until the program exits.
MYFIFO=/tmp/myfifo.$$
mkfifo "$MYFIFO"
MYFD=3
eval "exec $MYFD<> $MYFIFO"
external_program >&"$MYFD" 2>&"$MYFD" &
EXT_PID=$!
while kill -0 "$EXT_PID" ; do
read -t 1 LINE <&"$MYFD"
# Do stuff with $LINE
done
This works fine for reading input while the program is still running, but it looks like the read timeout is ignored, and the read call hangs after the external program exits.
I've used read with a timeout successfully in other scripts, and a simple test script that leaves out the external program times out correctly. What am I doing wrong here?
EDIT: It looks like read -t functions as expected when I run my script from the command line, but when I run it as part of an xcodebuild build process, the timeout does not function. What is different about these two environments?
I don't think -t will work with redirection.
From the man page:
-t timeout
Cause read to time out and return failure if a complete line
of input is not read within timeout seconds. This option has no
effect if read is not reading input from the terminal or a pipe.
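A quick way to check how read -t behaves in a given environment (a minimal diagnostic sketch, independent of the external program) is to read from a FIFO that never receives data and see whether the call returns after about a second:
mkfifo /tmp/testfifo.$$
exec 3<> /tmp/testfifo.$$
# nothing ever writes to fd 3, so read should give up after ~1 second if -t is honored
time read -t 1 LINE <&3
echo "read exit status: $?"   # nonzero here means read gave up rather than reading a line
rm /tmp/testfifo.$$
Running this both from an interactive shell and from the xcodebuild phase should show whether the difference really is the environment rather than the FIFO setup.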

shell script execution sequence

I'm debugging a shell script, so I added set -x at the beginning. A code snippet is below:
tcpdump -i $interface host $ip_addr and 'port 80' -w flash/flash.pcap &
sudo -u esolve firefox /tor_capture/flash.html &
sleep $capture_time
But I noticed that the execution sequence in the trace output is as below:
++ sleep 5
++ sudo -u esolve firefox /tor_capture/flash.html
++ tcpdump -i eth0 host 138.96.192.56 and 'port 80' -w flash/flash.pcap
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
So the execution sequence is reversed compared to the command order in the script. What is wrong here, and how do I deal with it?
Thanks!
Since those lines are being backgrounded, I think the output from set -x comes from the subshell that is spawned to run each program, and the main shell gets to the sleep command before the subshells have proceeded to the point where they generate their trace output. That would explain why the sleep command shows up first. With regard to the other two, you might occasionally get them in the other order as well, since there is no synchronization between them: depending on how many CPUs you have, how busy the system is, and so on, the timing between the subshells is effectively non-deterministic.
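The effect is easy to reproduce with a toy script (no tcpdump or firefox needed; these commands are only for illustration):
set -x
sleep 0.2 &   # backgrounded
echo hello &  # backgrounded as well
sleep 1       # foreground command
Run it a few times; the relative order of the trace lines for the backgrounded commands and the final sleep can change between runs, which is exactly the race described above.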
Do you need the first 2 lines to run as background processes?
If not, remove the & at the end and try again.

Buffered pipe in bash

I'm running a Bukkit (Minecraft) server on a Linux machine, and I want the server to shut down gracefully using its stop command and the computer to suspend at a certain time using pm-suspend from the command line. Here's what I've got:
me@comp:~/dir$ perl -e 'sleep [time]; print "stop\n";' | ./server && sudo pm-suspend
(I've edited by /etc/sudoers so I don't have to enter my password when I suspend.)
The thing is, while the perl -e is sleeping, the server seems to expect a constant stream of bytes (that's my guess; I could be misunderstanding something), so it prints out all of the nothing it receives, taking up precious resources:
me@comp:~/dir$ ...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>...
Is there any such thing as a buffered pipe? If not, are there any ways to send delayed input to a script?
You may want to have a look at Bukkit's wiki, which recommends an init script for permanently running servers.
This init script uses a rather unconventional approach to communicate with the running server: the server is started in a screen session, and then all commands are sent to the server console via screen, e.g.
screen -p 0 -S $SCREEN -X eval 'stuff \"stop\"\015'
See https://github.com/Ahtenus/minecraft-init/blob/master/minecraft
This approach suggests that Bukkit may expect standard input to be attached to a terminal, which is why the screen wrapper (itself a terminal emulator) is needed for unattended runs.
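Adapted to your pm-suspend use case, a minimal sketch along the same lines (the session name, the sleep durations, and the ./server path are placeholders for your setup) might look like:
# start the server inside a detached screen session named "bukkit"
screen -dmS bukkit ./server
# wait until the scheduled shutdown time
sleep 3600
# "type" stop followed by Enter into the server console, give it a moment to save, then suspend
screen -S bukkit -p 0 -X stuff "stop$(printf '\r')"
sleep 30
sudo pm-suspend
The printf '\r' supplies the carriage return that presses Enter in the console; it plays the same role as the \015 in the init script's eval 'stuff ...' line.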

grep 5 seconds of input from the serial port inside a shell-script

I've got a device that I'm operating next to my PC, and as it runs it spits log lines out of its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like:
ttylog -b 115200 -d /dev/ttyS0
I want to write 5 seconds of the device serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, and then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:
touch tempFile
ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
serialPID=$!
sleep 5
#kill ${serialPID} #does not work, gets wrong PID
killall ttylog
cat tempFile
The file gets created but never filled with any data. I can also replace the ttylog line with:
ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &
In neither case do I ever see any log lines written to stdout or to the log file, unless I have multiple copies of ttylog running by mistake (see the commented-out line, d'oh).
I have no idea what's going on here. It seems to be a failure of redirection within my script.
Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
It sounds like maybe ttylog is buffering its output. Have you tried running it with -f or --flush?
You might try the unbuffer script that comes with expect.
ttylog has a --timeout option, where you can simply specify for how many seconds it should run.
So, in your case, you could do
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5
and it would just run for 5 seconds and then stop.
Indeed, it also has the -f flush option mentioned above, but if you use --timeout you would not need that.
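Putting that together with the grep step you described (the keyword is just a placeholder for whatever you look for in the log):
# capture exactly 5 seconds of serial output, then search it
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5 > tempFile
grep -i "keyword" tempFile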