How to exit minicom via scripting - bash

I have a minicom script which sends some commands over serial and expects something back (which works), but I'm having trouble exiting the minicom screen.
Below is the minicom script:
success1:
print \nSuccessfully received running!
send "exit"
exit 0
success2:
print \nSuccessfully received degrading!
! killall -9 minicom
exit
I was using ! killall -9 minicom, which is recommended in their documentation, but unfortunately, when the script runs on Jenkins, it fails with exit code 137 (another process sent a signal 9). However, this does exit minicom, just not successfully.
On the other hand, send "exit" just logs out of the device and doesn't exit minicom.
How can I exit minicom and receive a 0 exit code?

You need to feed <stdin> with three characters: <Ctrl-A>x<Enter>.
Prepare the file escape.txt (using vi, for instance) so that it contains ^Ax^M.
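If you'd rather not create the file interactively, printf can write the same three bytes (a sketch; \001 is the octal escape for ^A and \r is the carriage return, with no trailing newline):
printf '\001x\r' > escape.txt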
Then launch the minicom script:
/bin/rm -f capture.txt; ( minicom -D /dev/ttyUSB0 -S test_minicom.macro -C capture.txt < escape.txt ) ; cat capture.txt

To build on what Diego shared: if you just need to exit minicom without an error and don't care about capturing the exit code, build escape.txt as Diego described, and then you only need to run:
( minicom -D /dev/ttyUSB0 -S test_minicom.macro -C capture.txt < escape.txt )
This proves to be very helpful for automated provisioning, for example with Ansible!

As an alternative to creating the escape.txt file, you can use echo to send the exit sequence.
Building on the above answers:
$ /bin/rm -f capture.txt; ( echo -ne "\x01x\r" ) | minicom -D /dev/ttyUSB0 -S test_minicom.macro -C capture.txt; cat capture.txt
To break down the echo command a little,
-n suppresses the trailing newline (\n) that echo normally appends
-e tells echo to interpret escape sequences
\x01 is the escape sequence for ^A (start-of-heading)
x tells minicom to exit
\r sends a ^M carriage-return
Hex output from echo:
$ echo -ne "\x01x\r" | od -A x -t x1a -v
000000 01 78 0d
soh x cr
000003
Note: if you just want to send some text and don't need the full minicom scripting, you can add an extra echo. The sleep may not be needed, depending on the command being run and whether you care about the output in capture.txt:
$ /bin/rm -f capture.txt; ( echo "poweroff"; sleep 1; echo -ne "\x01x\r" ) | minicom -D /dev/ttyUSB0 -C capture.txt; cat capture.txt

Related

Remove displayed PID in bash when running in the background

If we run a process in the background, we see the process PID as well as its output:
# echo cho &
cho
19078
Is it possible to get just:
# echo cho &
cho
Why do I need this?
I want to write a simple inline LAN scanner using only pings, for PCs which have no utilities like nmap or arp-scan.
for ip in 192.168.1.{1..254}; do (ping -c 1 -t 1 $ip > /dev/null && echo ">>> ${ip} is up"; ) & done
It works, but the PIDs spoil the output.
(echo cho &)
In the loop:
for ip in 192.168.23.{1..254}; do (ping -c 1 -t 1 $ip > /dev/null && echo ">>> ${ip} is up" &) done
I'd just run the for loop itself as a single job in the background. There's also no need to use parentheses to run the commands in a subshell (in Bash, the & control operator automatically creates a subshell to run the commands in). The fewer processes that are forked within the loop, the quicker it will run.
for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done &
If you don't want any job-control feedback printed to the screen, you can enclose the backgrounded loop in parentheses so that it runs within another subshell level:
( for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done & )
A better solution would be to redirect the output of the echo statements to a file and keep the job-control output, so that the shell can notify you when the loop has finished. That way you can keep using your shell without the terminal getting cluttered by output from the loop running in the background.
for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done > hosts_up &
Note: the above commands can be run as one-liners, but I use two lines here to avoid horizontal scrolling (&& at the end of a line means the command continues on the following line).
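If you want the parallel pings from the question combined with the clean file output, one possible arrangement (a minimal sketch, reusing the hosts_up file from above) is to background each ping inside the subshell and wait for all of them before the subshell exits:
( for ip in 192.168.1.{1..254}; do
    ping -c 1 -t 1 "$ip" > /dev/null && echo ">>> ${ip} is up" &
  done
  wait ) > hosts_up
Because the subshell is non-interactive, no job-control PIDs are printed, and the redirection applies to every backgrounded echo.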

When tee redirects to a subshell, the last two lines are missing

I have the following setup to record a command and its output executed on a remote machine:
rexec:// -t -t /usr/bin/ssh -q -x -o StrictHostKeyChecking=no -2 \
-l ${SSHUserName} -p 22 ${mainHost} \
| tee >(/opt/oss/clilogging/bin/clilogging.sh para1 para2)
clilogging.sh records each command and its output into a log file.
However, sometimes the last exit command and its output message "logout" are not written into the log file.
clilogging.sh is as follows:
#!/bin/bash
{
    while read R || [ -n "$R" ]; do
        # e.g. 2013-08-19T09:58:08+0300
        timestamp=`date +%FT%T%z`
        echo $timestamp $R
    done
} > /tmp/xxx.log
Could anybody help me?
Thanks a lot!
Thanks to thom's comment, and thank you all.
I have found the solution to this issue.
I needed to add the following line at the beginning of clilogging.sh:
trap "" HUP
This ignores the SIGHUP signal, so clilogging.sh does not quit immediately when the writing side hangs up and instead gets the chance to process everything left in its buffer.
man 7 signal
Signal Value Action Comment
-------------------------------------------------------------------------
SIGHUP 1 Term Hangup detected on controlling terminal
or death of controlling process
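For reference, here is clilogging.sh with the fix applied (a sketch based on the script above; only the trap line is new):
#!/bin/bash
# Ignore SIGHUP so the loop can drain whatever is left in the
# pipe buffer after the writer exits, instead of dying immediately.
trap "" HUP
{
    while read R || [ -n "$R" ]; do
        # e.g. 2013-08-19T09:58:08+0300
        timestamp=`date +%FT%T%z`
        echo $timestamp $R
    done
} > /tmp/xxx.log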

create read/write environment using named pipes

I am using Red Hat EL 4 with Bash 3.00.15.
I am writing SystemVerilog and I want to emulate stdin and stdout. I can only use files, since normal stdin and stdout are not supported in the environment. I would like to use named pipes to emulate stdin and stdout.
I understand how to create a to_sv and a from_sv file using mkfifo, and how to open and use them in SystemVerilog.
By using "cat > to_sv" I can send strings to the SystemVerilog simulation, but that also echoes what I'm typing in the shell.
I would like, if possible, a single shell that acts almost like a UART terminal: whatever I type goes directly out to to_sv, and whatever is written to from_sv gets printed.
If I am going about this completely wrong, then by all means suggest the correct way! Thank you so much,
Nachum Kanovsky
Edit: You can output to a named pipe and read from another one in the same terminal. You can also disable echoing of keystrokes to the terminal using stty -echo.
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
cat /tmp/from & cat > /tmp/to
With this command, everything you write goes to /tmp/to without being echoed, and everything written to /tmp/from is printed.
Update: I have found a way to send each character typed to /tmp/to one at a time. Instead of cat > /tmp/to, use this command:
while IFS= read -n1 c; do
    if [ -z "$c" ]; then
        printf "\n" >> /tmp/to
    fi
    printf "%s" "$c" >> /tmp/to
done
You probably want to use exec as in:
exec > to_sv
exec < from_sv
See sections 19.1. and 19.2. in the Advanced Bash-Scripting Guide - I/O Redirection
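As a minimal sketch of that approach, assuming the to_sv and from_sv fifos from the question already exist (note that opening a fifo blocks until the other end is opened):
#!/bin/bash
exec > to_sv    # from here on, everything written to stdout goes into the fifo
exec < from_sv  # and everything read from stdin comes out of the fifo
echo "hello simulator"   # ends up in to_sv
read reply               # blocks until the simulator writes to from_sv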
Instead of cat /tmp/from & you may use tail -f /tmp/from & (at least here on Mac OS X 10.6.7 this prevented a deadlock if I echo more than once to /tmp/from).
Based on Lynch's code:
# terminal window 1
(
    rm -f /tmp/from /tmp/to
    mkfifo /tmp/from
    mkfifo /tmp/to
    stty -echo
    #cat -u /tmp/from &
    tail -f /tmp/from &
    bgpid=$!
    trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
    while IFS= read -n1 c; do
        if [ -z "$c" ]; then
            printf "\n" >> /tmp/to
        fi
        printf "%s" "$c" >> /tmp/to
    done
)
# terminal window 2
(
    tail -f /tmp/to &
    bgpid=$!
    trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
    wait
)
# terminal window 3
echo "hello from /tmp/from" > /tmp/from

Timeout command on Mac OS X?

Is there an alternative to the timeout command on Mac OS X? The basic requirement is that I be able to run a command for a specified amount of time.
e.g:
timeout 10 ping google.com
This command runs ping for 10 seconds on Linux.
You can use
brew install coreutils
And then whenever you need timeout, use
gtimeout
instead. To explain why, here's a snippet from the Homebrew Caveats section:
Caveats
All commands have been installed with the prefix 'g'.
If you really need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
Additionally, you can access their man pages with normal names if you add
the "gnuman" directory to your MANPATH from your bashrc as well:
MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH"
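After installation, the example from the question becomes simply:
gtimeout 10 ping google.com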
Another simple approach that works pretty much cross-platform (because it uses perl, which is nearly everywhere) is this:
function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
Snagged from here:
https://gist.github.com/jaytaylor/6527607
Instead of putting it in a function, you can just put the following line in a script, and it'll work too:
timeout.sh
perl -e 'alarm shift; exec @ARGV' "$@";
or a version that has built in help/examples:
timeout.sh
#!/usr/bin/env bash
function show_help()
{
IT=$(cat <<EOF
Runs a command, and times out if it doesn't complete in time
Example usage:
# Will fail after 1 second, and shows non zero exit code result
$ timeout 1 "sleep 2" 2> /dev/null ; echo \$?
142
# Will succeed, and return exit code of 0.
$ timeout 1 sleep 0.5; echo \$?
0
$ timeout 1 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
142
$ timeout 3 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
bye
0
EOF
)
echo "$IT"
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
#
# Mac OS-X does not come with the delightfully useful `timeout` program. Thankfully a rough BASH equivalent can be achieved with only 2 perl statements.
#
# Originally found on SO: http://stackoverflow.com/questions/601543/command-line-command-to-auto-kill-a-command-after-a-certain-amount-of-time
#
perl -e 'alarm shift; exec @ARGV' "$@";
As kvz stated, simply use Homebrew:
brew install coreutils
Now the timeout command is ready to use - no aliases are required (and no gtimeout needed, although that is also available).
You can limit the execution time of any program using this command:
ping -t 10 google.com & sleep 5; kill $!
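The same idea can be wrapped in a small function (a sketch; run_for is a made-up name, it always sleeps for the full duration even if the command finishes early, and kill $! can in principle hit a recycled PID):
run_for() {     # usage: run_for SECONDS COMMAND [ARG...]
    local secs=$1; shift
    "$@" &
    local pid=$!
    sleep "$secs"
    kill "$pid" 2> /dev/null
}
run_for 5 ping google.com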
The Timeout Package from Ubuntu / Debian can be made to compile on Mac and it works.
The package is available at http://packages.ubuntu.com/lucid/timeout
You can do ping -t 10 google.com > /dev/null
The > /dev/null discards the output, so instead of printing 64 bytes from 123.45.67.8 ... for every reply, it will just show nothing until it times out. The -t flag can be changed to any number of seconds.

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script whether its standard output is going to a terminal or is piped to another process?
The case in point: I'd like to add escape codes to colorize output when running interactively, but not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when this is executed as a non-TTY ssh command, the standard streams always look like they are being piped. To demonstrate this, using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
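For example, on Linux you can follow the fd 1 symlink directly (a sketch; Linux-only, not POSIX):
case "$(readlink /proc/$$/fd/1)" in
    pipe:*)               echo 'stdout is a pipe' ;;
    /dev/pts/*|/dev/tty*) echo 'stdout is a terminal' ;;
    *)                    echo 'stdout is redirected to a file or elsewhere' ;;
esac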
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The test command (a builtin in Bash) has an option to check whether a file descriptor is a tty:
if [ -t 1 ]; then
# Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
# stdout is a terminal
else
# stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works, but the -p test does not respond as desired.
The file bash_redir_test.sh looks like this:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable or recommended, but for the sake of completeness, here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
