Hide or suppress argument values passed to a shell script - bash

From a local machine I am running a shell script on a remote server, passing some arguments to the script, like test.sh "name" "age".
This is my script:
#!/bin/bash
echo $1
echo $2
On the remote server, while the script is executing, if I run ps aux | grep .sh I can see the values of the two parameters, like bash -s name age.
Is there a way to suppress or hide the values in the running shell process so that no one can see the parameters?

I have an idea. You could create a global environment variable with a unique name, save the positional arguments in it, then re-exec your process and recover the arguments from the variable:
#!/bin/bash
if [[ -z "$MYARGS" ]]; then
  # quote each argument so it survives the round trip through the environment
  export MYARGS="$(printf "%q " "$@")"
  exec "$0"
fi
eval set -- "$MYARGS"
printf -- "My arguments:\n"
printf -- "-- %s\n" "$@"
sleep infinity
This hides the arguments from ps aux:
$ ps aux | grep 1.sh
kamil 196704 0.4 0.0 9768 2084 pts/1 S+ 16:49 0:00 /bin/bash /tmp/1.sh
kamil 196777 0.0 0.0 8924 1640 pts/2 S+ 16:49 0:00 grep 1.sh
The environment variable can still be extracted from /proc, however:
$ cat /proc/196704/environ | sed -z '/MYARGS/!d'; echo
MYARGS=1 2 3 54 5
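Note that /proc/PID/environ reflects the environment as it was when the process was exec'd, so unsetting MYARGS after parsing it will not remove it from there.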
Another way might be to write the positional arguments as a string on stdin and pass them to ourselves along with the original input:
#!/bin/bash
if [[ -z "$MYARGS" ]]; then
  export MYARGS=1 # just so it's set
  # restart ourselves with no arguments
  exec "$0" < <(
    # Stream the quoted arguments on stdin as a single hex-encoded line
    printf "%q " "$@" | xxd -p | tr -d '\n'
    echo
    # then forward the original stdin
    exec cat
  )
fi
IFS= read -r args            # read _one line_ of input - it's our arguments
args=$(xxd -r -p <<<"$args") # it was encoded with xxd
eval set -- "$args"
printf -- "My arguments:\n"
printf -- "-- %s\n" "$@"
sleep infinity
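A quick check of this variant, assuming the script is saved as /tmp/2.sh, shows the decoded arguments while ps again lists the process without them:
$ /tmp/2.sh foo "bar baz"
My arguments:
-- foo
-- bar baz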

Here's a way to take the command-line args and also read args from stdin:
#!/usr/bin/env bash
args=()
# collect command-line arguments
for arg; do
  printf "%d\t%s\n" $((++c)) "$arg"
  args+=("$arg")
done
# if stdin is not a terminal, also collect arguments from it, one per line
if ! [[ -t 0 ]]; then
  while IFS= read -r arg; do
    args+=("$arg")
  done
fi
declare -p args
So you can do:
script.sh hello world
printf "%s\n" hello world | script.sh
echo world | script.sh hello
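For instance, the third invocation above (assuming the script is saved as script.sh in the current directory) prints the numbered command-line argument first and then appends the line read from stdin:
$ echo world | ./script.sh hello
1	hello
declare -a args=([0]="hello" [1]="world")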


How to run commands off of a pipe

I would like to run commands such as "history" or "!23" off of a pipe.
How might I achieve this?
Why does the following command not work?
echo "history" | xargs eval $1
To answer (2) first:
history and eval are both bash builtins. So xargs cannot run either of them.
xargs does not take $1-style arguments; see man xargs for the correct syntax.
For (1), it doesn't really make much sense to do what you are attempting because shell history is not likely to be synchronised between invocations, but you could try something like:
{ echo 'history'; echo '!23'; } | bash -i
or:
{ echo 'history'; echo '!23'; } | while read -r cmd; do eval "$cmd"; done
Note that pipelines run inside subshells. Environment changes are not retained:
x=1; echo "x=2" | while read -r cmd; do eval "$cmd"; done; echo "$x"
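If you do need the changes to survive, one option is bash's lastpipe option, which runs the final pipeline stage in the current shell; a minimal sketch (bash 4.2+, and it only takes effect when job control is off, as it is in scripts):
#!/bin/bash
shopt -s lastpipe   # run the last stage of a pipeline in the current shell
x=1
echo "x=2" | while read -r cmd; do eval "$cmd"; done
echo "$x"           # prints 2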
You can try it like this.
First, redirect the history commands to a file (cutting out the line numbers):
history | cut -c 8- > cmd.txt
Now create this script, hcmd.sh (adapted from Read a file line by line assigning the value to a variable):
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
  echo "Text read from file: $line"
  $line  # execute the line as a command
done < "cmd.txt"
Run it like this:
./hcmd.sh

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
  echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
  output=; trap 'output=1' USR1
  while IFS= read -r line; do
    [[ $output ]] && printf '%s\n' "$line"
  done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
  local count line
  count=0
  while IFS= read -r line; do
    (( ++count ))
    printf '%s: %s\n' "$count" "$line"
  done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable.
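For example, with multi-line input only the last line survives:
$ printf 'line1\nline2\nline3\n' | sed '$!d'
line3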
I suspect that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can simply use:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could serve as a marker. When it is running SQL, you might use something like:
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and capture the result in a variable with:
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )

reading from serial using shellscript

I have a serial port device that I would like to test using Linux command line.
If I run the following command from a terminal, it gives output:
cat < /dev/ttyS0 &
This command opens the serial port and relays what it reads from it to its stdout. So I tried it from a shell script file, but it is not working:
fName="test.txt";
awk '
BEGIN { RS = "" ; FS = "\n" }
{
address = '/dev/ttyS0';
system("cat < " address );
}
END {
}' "$fName";
But it is not working and gives no output. How can I listen to the communication between a process and a serial port? Thanks.
Using awk timeouts
I've successfully read something under dash by using the GAWK_READ_TIMEOUT environment variable:
out=`GAWK_READ_TIMEOUT=3000 awk '{print}' </dev/ttyS0 & sleep 1 ; echo foo >/dev/ttyS0`
On my terminal, this outputs:
echo "$out"
foo
Password:
or
echo "$out"
Login incorrect
testhost login:
Using bash timeouts
You could use file descriptors under bash as follows:
exec 5>/dev/ttyS0
exec 6</dev/ttyS0
while read -t .1 -u 6 line; do
  echo $line
done
or, to read unfinished lines:
while IFS= read -d '' -t .1 -u 6 -rn 1 char; do
  echo -n "$char"
done
echo
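Note that fractional timeouts such as -t .1 require bash 4.0 or newer; on older versions, use a whole-second timeout instead.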
So you could:
echo 'root' >&5
while IFS= read -d '' -t .1 -u 6 -rn 1 char; do
  echo -n "$char"
done
echo 'password is 1234' >&5
while IFS= read -d '' -t .1 -u 6 -rn 1 char; do
  echo -n "$char"
done
Once done, you could close the FDs by running:
exec 6<&-
exec 5>&-
Sample bash poor man's terminal script
I've logged and test some commands with:
#!/bin/bash
exec 5>/dev/ttyS0
exec 6</dev/ttyS0
readbuf() {
  while IFS= read -d '' -t .1 -u 6 -rn 1 char; do
    echo -n "$char"
  done
}
while [ "$cmd" != "tquit" ]; do
  readbuf
  read cmd
  echo >&5 "$cmd"
done
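Depending on the device you may also need to configure the serial line before reading from it, e.g. with stty (the baud rate here is an assumption; adjust it for your hardware):
stty -F /dev/ttyS0 115200 raw -echo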

Using <<< throws error on sh shell not in bash

In Sh
[SunOs] /opt # sh
[\h] \w \$ read -a array <<< "1 2 3";echo ${array[1]}
syntax error: `<' unexpected
In Bash
[SunOs] ~ # bash
[SunOs] ~ # read -a array <<< "1 2 3";echo ${array[1]}
2
Why is the error thrown in the sh shell? I'm using SunOS 5.10 Generic_147440-10 sun4v sparc sun4v.
Here-strings aren't supported in sh, which causes the error when you try to run it there.
As a workaround you may use the POSIX builtin command set to assign your arguments to the positional parameters $1, $2, ... or the positional parameter array $@ respectively:
{
  IFS="`printf ' \n\t'`"
  export IFS
  printf '%s' "$IFS" | od -b
  set -- `printf '%s' "1 2 3"`
  echo "$0"
  echo "$1"
  echo "$2"
  echo "$3"
  echo "$#"
}
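Alternatively, since here-documents (unlike here-strings) are POSIX, you can feed read from one; a minimal sketch (POSIX sh has no arrays, so the fields go into separate variables):
#!/bin/sh
read a b c <<EOF
1 2 3
EOF
echo "$b"   # prints 2, like ${array[1]} in the bash version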

Bash - output of command seems to be an integer but "[" complains

I am checking to see if a process on a remote server has been killed. The code I'm using is:
if [ `ssh -t -t -i id_dsa headless@remoteserver.com "ps -auxwww |grep pipeline| wc -l" | sed -e 's/^[ \t]*//'` -lt 3 ]
then
  echo "PIPELINE STOPPED SUCCESSFULLY"
  exit 0
else
  echo "PIPELINE WAS NOT STOPPED SUCCESSFULLY"
  exit 1
fi
However when I execute this I get:
: integer expression expected
PIPELINE WAS NOT STOPPED SUCCESSFULLY
1
The actual value returned is "1" with no whitespace. I checked that by:
vim <(ssh -t -t -i id_dsa headless@remoteserver.com "ps -auxwww |grep pipeline| wc -l" | sed -e 's/^[ \t]*//')
and then ":set list" which showed only the integer and a line feed as the returned value.
I'm at a loss here as to why this is not working.
If the output of the ssh command is truly just an integer preceded by optional tabs, then you shouldn't need the sed command; the shell will strip the leading and/or trailing whitespace as unnecessary before using it as an operand for the -lt operator.
if [ $(ssh -tti id_dsa headless@remoteserver.com "ps -auxwww | grep -c pipeline") -lt 3 ]; then
It is possible that the result of the ssh is not the same when you run it manually as when it runs in the script. You might try saving it in a variable so you can output it before testing it:
result=$( ssh -tti id_dsa headless@remoteserver.com "ps -auxwww | grep -c pipeline" )
if [ $result -lt 3 ];
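If the value looks right but the comparison still fails, dumping it with od makes invisible characters (such as a stray carriage return) visible:
printf '%s' "$result" | od -c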
The value you get back is not entirely numeric. Maybe some shell metacharacter, linefeed, or whatever gets in your way here:
#!/bin/bash
var=$(ssh -t -t -i id_dsa headless@remoteserver.com "ps auxwww |grep -c pipeline")
echo $var
# just to prove my point here:
# remove all digits, and check whether anything is left -> then it's not an integer
test -z "$var" -o -n "`echo $var | tr -d '[0-9]'`" && echo not-integer
# extract only the digits to use them for the arithmetic comparison
var2=$(grep -o "[0-9]" <<<"$var")
echo $var2
if [[ $var2 -lt 3 ]]
then
  echo "PIPELINE STOPPED SUCCESSFULLY"
  exit 0
else
  echo "PIPELINE WAS NOT STOPPED SUCCESSFULLY"
  exit 1
fi
As user mbratch noticed, I was getting a "\r" in the returned value in addition to the expected "\n". So I changed my sed script so that it stripped out the "\r" instead of the leading whitespace (which chepner pointed out was unnecessary).
sed -e 's/\r*$//'
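An equivalent fix is to delete the carriage returns with tr in place of the sed expression:
tr -d '\r'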
