grep 5 seconds of input from the serial port inside a shell-script - bash

I've got a device operating next to my PC, and as it runs it spits log lines out its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like:
ttylog -b 115200 -d /dev/ttyS0
I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will tell me how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:
touch tempFile
ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
serialPID=$!
sleep 5
#kill ${serialPID} #does not work, gets wrong PID
killall ttylog
cat tempFile
The file gets created but never filled with any data. I can also replace the ttylog line with:
ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &
In neither case do I ever see any log lines written to stdout or to the log file, unless I have multiple copies of ttylog running by mistake (see the commented-out line, d'oh).
I have no idea what's going on here. It seems to be a failure of redirection within my script.
Am I on the right track? Is there a better way to sample 5 seconds of the serial port?

It sounds like maybe ttylog is buffering its output. Have you tried running it with -f or --flush?

You might try the unbuffer script that comes with expect.
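If buffering is the issue, a minimal sketch of the original capture wrapped in unbuffer (assuming the expect package is installed) would look like:
unbuffer ttylog -b 115200 -d /dev/ttyS0 >> tempFile &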

ttylog has a --timeout option, where you can simply specify for how many seconds it should run.
So, in your case, you could do
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5
and it would just run for 5 seconds and then stop.
It also has the -f flush option mentioned above, but if you use --timeout you shouldn't need it.
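Putting it together, a minimal sketch of the capture-then-grep idea, using only the options described above (the keyword "ERROR" is just a placeholder for whatever you are looking for):
tempFile=$(mktemp)
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5 > "$tempFile"
if grep -q "ERROR" "$tempFile"; then
    echo "device reported an error"
fi
rm -f "$tempFile"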

Related

Mac OS: Script that does something, then starts an Application, then waits until it terminates, and finally does something?

On macOS, is it possible to create an Automator/Bash/Java/AppleScript that runs a bash command to do something (for example, change the screen resolution), then runs an application (for example, a game that needs a specific screen resolution), waits until the application has terminated, and after that does one final thing (for example, change the screen resolution back)?
I tried working with Automator, Bash, Java, and AppleScript. I even tried combining several of them into one chain that runs one thing, then another, waits for it to terminate, and then runs something else, but none of that seems to work properly.
I have the terminal command that changes the screen resolution and the terminal command that runs the game, but I can't bring them together into a logically correct chain of events...
The Commands are:
do shell script "/Volumes/Sierra/Users/xyz/Documents/cscreen -x 1600 -y 900 -r 60"
do shell script "open steam://run/8930"
do shell script "/Volumes/Sierra/Users/xyz/Documents/cscreen -x 1280 -y 720 -r 60"
What you want is the -W argument for open:
-W  Causes open to wait until the applications it opens (or that were already open) have exited. Use with the -n flag to allow open to function as an appropriate app for the $EDITOR environment variable.
So in your example I would make a script like this:
#!/bin/bash
# Set the resolution the game needs
/Volumes/Sierra/Users/xyz/Documents/cscreen -x 1600 -y 900 -r 60
# -W makes open block until the opened application exits
open -W steam://run/8930
# Restore the original resolution
/Volumes/Sierra/Users/xyz/Documents/cscreen -x 1280 -y 720 -r 60
Now open should not return control to the shell until steam exits.

Bash: writing output to a file using timeout results in an error

I'm using this script to monitor iBeacon Bluetooth devices, and it works as expected:
sudo beacon scan -c
However, I recently changed it to run for just a few seconds and output the result to a file, like so:
sudo timeout 5 beacon scan -c > result.txt
The problem is that this writes nothing, so there is probably an error in the command. Redirecting the error stream to the file as well shows an error:
sudo timeout 5 beacon scan -c &> result.txt
Contents of result.txt:
Set scan parameters failed: Input/output error
It feels like bash is trying to apply &> result.txt as parameters to the beacon scan command. I'm not very good at bash, so there is probably a simple solution to this problem, but I haven't found one!
Some programs designed to be interrupted with Ctrl-C don't behave the same when terminated with SIGTERM, which is what timeout sends by default. Try the -s INT option to have timeout send SIGINT instead.
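For example, the original command would become something like:
sudo timeout -s INT 5 beacon scan -c > result.txt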

Why does "read -t" block in scripts launched from xcodebuild?

I have a script that creates a FIFO and launches a program that writes output to the FIFO. I then read and parse the output until the program exits.
MYFIFO=/tmp/myfifo.$$
mkfifo "$MYFIFO"
MYFD=3
eval "exec $MYFD<> $MYFIFO"
external_program >&"$MYFD" 2>&"$MYFD" &
EXT_PID=$!
while kill -0 "$EXT_PID"; do
    read -t 1 LINE <&"$MYFD"
    # Do stuff with $LINE
done
This works fine for reading input while the program is still running, but it looks like the read timeout is ignored, and the read call hangs after the external program exits.
I've used read with a timeout successfully in other scripts, and a simple test script that leaves out the external program times out correctly. What am I doing wrong here?
EDIT: It looks like read -t functions as expected when I run my script from the command line, but when I run it as part of an xcodebuild build process, the timeout does not function. What is different about these two environments?
I don't think -t will work with redirection.
From the man page:
-t timeout
    Cause read to time out and return failure if a complete line of input is not read within timeout seconds. This option has no effect if read is not reading input from the terminal or a pipe.

shell script execution sequence

I'm debugging a shell script, so I added set -x at the beginning. A code snippet is shown below:
tcpdump -i $interface host $ip_addr and 'port 80' -w flash/flash.pcap &
sudo -u esolve firefox /tor_capture/flash.html &
sleep $capture_time
but I noticed that the execution sequence is as follows:
++ sleep 5
++ sudo -u esolve firefox /tor_capture/flash.html
++ tcpdump -i eth0 host 138.96.192.56 and 'port 80' -w flash/flash.pcap
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
So the execution sequence is reversed compared to the command sequence in the script. What is wrong here, and how do I deal with it? Thanks!
Since those lines are being backgrounded, I think the set -x output comes from the subshells spawned to run the programs, and the main shell gets to the sleep command before the subshells have progressed far enough to generate their trace output. That would explain why the sleep command shows up first. As for the other two, you might occasionally get them in the other order as well, since there is no synchronization between them: depending on how many CPUs you have, how busy the system is, and so on, the timing between the subshells is effectively non-deterministic.
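A minimal demonstration of that effect, unrelated to tcpdump or firefox:
#!/bin/bash
set -x
sleep 1 &        # backgrounded; its trace line comes from the subshell
echo foreground
wait
Depending on scheduling, the trace for echo can appear before the trace for the backgrounded sleep, even though sleep comes first in the script.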
Do you need the first 2 lines to run as background processes?
If not, remove the & at the end and try again.

Need Help with Unix bash "timer" for mp3 URL Download Script

I've been using the following Unix bash script:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
to download and rename a bunch of mp3 files from URLs listed in "URLs.txt". It works well (thanks to Stack Overflow users), but due to a suspected server quantity/time download limit, it only lets me access 40-50 files from my URL list.
Is there a way to work around this by adding a "timer" inside the while loop so it downloads 1 file per "X" seconds?
I found another related question, here:
How to include a timer in Bash Scripting?
but I'm not sure where to add the "sleep [number of seconds]"... or even if "sleep" is really what I need for my script...?
Any help enormously appreciated — as always.
Dave
curl has some pretty awesome command-line options (documentation), for example, --limit-rate will limit the amount of bandwidth that curl uses, which might completely solve your problem.
For example, replace the curl line with:
curl --limit-rate 200K "$mp3" > ~/Desktop/URLs/$n.mp3
would limit the transfers to an average of 200K per second, which would download a typical 5MB MP3 file in 25 seconds, and you could experiment with different values until you found the maximum speed that worked.
You could also try a combination of --retry and --retry-delay so that when and if a download fails, curl waits and then tries again after a certain amount of time.
For example, replace the curl line with:
curl --retry 30 "$mp3" > ~/Desktop/URLs/$n.mp3
This will transfer the file. If the transfer fails, it will wait a second and try again. If it fails again, it will wait two seconds. If it fails again, it will wait four seconds, and so on, doubling the waiting time until it succeeds. The "30" means it will retry up to 30 times, and it will never wait more than 10 minutes. You can learn this all at the documentation link I gave you.
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3 &
    ((n++))
    if ! ((n % 4)); then
        wait
        sleep 5
    fi
done < ~/Desktop/URLs.txt
This will spawn at most 4 instances of curl and then wait for them to complete before it spawns 4 more.
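If the limit really is one file per so many seconds, the sleep the question asks about would go right after the curl line inside the loop. A minimal sketch (the 5-second delay is only a guess at what the server tolerates):
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
    sleep 5   # pause between downloads; adjust to whatever the server allows
done < ~/Desktop/URLs.txt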
A timer?
Like your crontab?
man cron
You know what they let you download: just count the disk usage of the files you did get. That is the transfer you are allowed. You need that number, and you will need the PID of your script.
ps aux | grep "$progname" | awk '{print $2}'
or something like that. The secret sauce here is that you can suspend with
kill -SIGSTOP PID
and resume with
kill -SIGCONT PID
So the general method would be:
Put the URLs in an array or queue, or whatever structure bash lets you have.
Process a URL.
Increment the transfer counter.
When the transfer counter gets close to the limit, kill -SIGSTOP MYPID. You are suspended.
In your crontab, foreground (resume) your script after a minute/hour/day, whatever.
Continue processing.
Repeat until done.
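A very rough sketch of that method (untested; the byte limit, paths, and pid file are placeholders):
#!/bin/bash
LIMIT=$((100 * 1024 * 1024))   # allowed bytes per period (placeholder guess)
echo $$ > /tmp/mp3grab.pid     # so a cron job knows which PID to SIGCONT
transferred=0
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    transferred=$((transferred + $(wc -c < ~/Desktop/URLs/$n.mp3)))
    ((n++))
    if ((transferred >= LIMIT)); then
        transferred=0
        kill -SIGSTOP $$       # suspend ourselves until something sends SIGCONT
    fi
done < ~/Desktop/URLs.txt
The cron side then only needs to run kill -SIGCONT "$(cat /tmp/mp3grab.pid)" at whatever interval resets the limit.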
Just don't log out, or you'll need to do the whole thing over again (although if you used Perl it would be trivial).
Disclaimer: I am not sure if this is an exercise in bash or whatnot; I confess freely that I see the answer in Perl, which is always my choice outside of a REPL. Code in Bash long enough, or heaven forbid Zsh (my shell), and you will see why Perl was so popular. Ah, memories...
Disclaimer 2: Untested, drive-by, garbage methodology here, only made possible because you have an idea what that transfer limit might be. Obviously, if you have ssh, use ssh -D PORT you@host and pull the mp3s through the proxy half the time.
In my own defense: if you slow-pull the URLs with sleep, you'll be connected for a while, and perhaps "they" might notice that. Suspend and resume, and you should only be connected while grabbing tracks, and gone otherwise.
Not so much an answer as an optimization. If you can consistently get the first few URLs, but it times out on the later ones, perhaps you could trim your URL file as the mp3s are successfully received?
That is, as 1.mp3 is successfully downloaded, strip it from the list:
tail -n +2 url.txt > url2.txt; mv -f url2.txt url.txt
Then the next time the script runs, it'll begin from 2.mp3
If that works, you might just set up a cron job to periodically execute the script over and over, taking bites at a time.
It just occurred to me that you're programmatically numbering the mp3s, and curl might clobber some of them on restart, since every time it runs it'll start counting at 1.mp3 again. Something to be aware of, if you go this route.
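One hedged way around that clobbering, assuming the already-downloaded mp3s are the only files in that directory, is to seed the counter from what is on disk instead of starting at 1:
n=$(ls ~/Desktop/URLs | wc -l)
n=$((n + 1))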
