Bash Child/Parent Pipe Inheritance Exploit

#!/bin/bash
ipaddr=${1}
rdlnk=$(readlink /proc/$$/fd/0)
user=""
passwd=""

function get_input() {
    if grep -Eq "^pipe:|deleted" <<< "${rdlnk}" || [[ -p "${rdlnk}" ]]; then
        while IFS= read -r piped_input || break; do
            [[ -z "${ipaddr}" ]] && ipaddr="${piped_input}" && continue
            [[ -z "${user}" ]] && user="${piped_input}" && continue
            [[ -z "${passwd}" ]] && passwd="${piped_input}" && continue
        done
    fi
    echo "Got that IP address you gave me to work on: ${ipaddr}"
    [[ -n "${user}" ]] && echo "[... and that user: ${user}]"
    [[ -n "${user}" ]] && echo "[... and that users password: ${passwd}]"
}

get_input
exit 0
Normally it's fine:
$> process_ip.bsh 71.123.123.3
Got that IP address you gave me to work on: 71.123.123.3
But, put the parent into a piped loop and watch out:
$ echo -en "71.123.123.3\nroot\ntoor\n" | while read a; do echo "Parent loop, processing: ${a}"; grep -q '^[0-9]\{1,3\}.[0-9]\{1,3\}.[0-9]\{1,3\}.[0-9]\{1,3\}' <<< "${a}" && ./process_ip.bsh "$a"; done
Parent loop, processing: 71.123.123.3
Got that IP address you gave me to work on: 71.123.123.3
[... and that user: root]
[... and that users password: toor]
Ouch. The parent only wanted to provide the IP address from its pipe to the child. (This presumes the parent still holds the pipe with sensitive data open at the moment it forks the child process.) How can this be prevented?

process_ip.bsh, like any other process, inherits its standard input from its parent. This line
rdlnk=$(readlink /proc/$$/fd/0)
doesn't do exactly what you think it does. It only names the file the parent is using for standard input, because the script inherits its standard input from the parent. ($$ is the process ID of the current shell because process_ip.bsh is a separate process, not merely a subshell started by the parent.)
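You can observe this inheritance directly (a Linux-specific sketch, since it relies on /proc):

```shell
# When a process is started with its stdin piped, fd 0 resolves to an
# anonymous pipe rather than a terminal or a regular file.
echo hi | readlink /proc/self/fd/0    # prints something like: pipe:[123456]
readlink /proc/self/fd/0 </dev/null   # prints: /dev/null
```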
If you redirect input to process_ip.bsh, you are in complete control of what it receives.
while read a; do
    echo "Parent loop, processing: ${a}"
    grep -q '^[0-9]\{1,3\}.[0-9]\{1,3\}.[0-9]\{1,3\}.[0-9]\{1,3\}' <<< "${a}" &&
        ./process_ip.bsh "$a" < /dev/null
done <<EOF
71.123.123.3
root
toor
EOF
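Alternatively, the child can defend itself without relying on the parent: before reading anything, it can check whether its inherited stdin is a pipe and detach from it. A minimal sketch (`safe_get_input` is a hypothetical helper, not part of the scripts above):

```shell
# Sketch: refuse to read from an inherited pipe. After the exec, reads
# see EOF instead of the parent's sensitive data.
safe_get_input() {
    if [[ -p /dev/stdin ]]; then
        exec < /dev/null          # detach from the parent's pipe
    fi
    local leaked=""
    read -r leaked || true        # read hits EOF immediately; nothing is stolen
    echo "leaked=<${leaked}>"
}
```

With this guard, `echo secret | safe_get_input` prints `leaked=<>` instead of leaking the parent's data.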

Related

How can I pipe output, from a command in an if statement, to a function?

I can't tell if something I'm trying here is simply impossible or if I'm really lacking knowledge in bash's syntax. This is the first script I've written.
I've got a Nextcloud instance that I am backing up daily using a script. I want to log the output of the script as it runs to a log file. This is working fine, but I wanted to see if I could also pipe the Nextcloud occ command's output to the log file too.
I've got an if statement here checking if the file scan fails:
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
This works fine and I am able to handle the error if the system cannot execute the command. The error string above is sent to this function:
Print()
{
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1" | tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        echo "$1" >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1"
    fi
}
How can I make it so the output of the occ command is also piped to the Print() function so it can be logged to the console and log file?
I've tried piping the command after ! using | Print without success.
Any help would be appreciated, cheers!
The Print function doesn't read standard input so there's no point piping data to it. One possible way to do what you want with the current implementation of Print is:
if ! occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
Print "'occ' output: $occ_output"
Since there is only one line in the body of the if statement you could use || instead:
occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1) \
    || Print "Error: Failed to scan files. Are you in maintenance mode?"
Print "'occ' output: $occ_output"
The 2>&1 causes both standard output and error output of occ to be captured to occ_output.
Note that the body of the Print function could be simplified to:
[[ $quiet_mode == No ]] && printf '%s\n' "$1"
(( logging )) && printf '%s\n' "$1" >> "$log_file"
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I replaced echo "$1" with printf '%s\n' "$1".
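As a quick illustration of the difference (the `-n` string is just an example of data that trips up echo):

```shell
# echo may swallow arguments that look like its own options;
# printf '%s\n' prints its argument verbatim.
msg='-n'
printf '%s\n' "$msg"    # prints: -n
echo "$msg"             # in bash, prints nothing: -n is taken as an option
```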
How's this? A bit unorthodox perhaps.
Print()
{
    case $# in
        0) cat;;
        *) echo "$@";;
    esac |
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        cat >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        cat
    fi
}
With this, you can either
echo "hello mom" | Print
or
Print "hello mom"
and so your invocation could be refactored to
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    echo "Error: Failed to scan files. Are you in maintenance mode?"
fi |
Print
The obvious drawback is that piping into a function loses the exit code of any failure earlier in the pipeline.
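That said, bash keeps the per-command statuses of the most recent pipeline in the `PIPESTATUS` array, so the caller can still recover the earlier failure. A bash-specific sketch; `Print` here is a trivial stand-in for the logging function:

```shell
# Sketch: recover the exit status of the command feeding a pipe.
Print() { cat; }               # minimal stand-in for the logging function
false | Print                  # the pipeline exits with Print's status (0)
occ_status=${PIPESTATUS[0]}    # ...but PIPESTATUS[0] holds false's status: 1
echo "producer exited with $occ_status"
```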
For a more traditional approach, keep your original Print definition and refactor the calling code to
if output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    : nothing
else
    Print "error $?: $output"
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
I would imagine that the error message will be printed to standard error, not standard output; hence the addition of 2>&1
I included the error code $? in the error message in case that would be useful.
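If you also want to watch occ's output live as it is produced, rather than only after the command finishes, streaming through tee with bash's pipefail option is one way. A sketch with a stand-in producer and a temporary log file (both are made up for the demo):

```shell
# Sketch: stream output to console and log file simultaneously, while
# still detecting the producer's failure via pipefail (a bash option).
log_file=$(mktemp)
set -o pipefail
if ! { echo "scanning files..."; false; } | tee -a "$log_file"; then
    echo "Error: Failed to scan files. Are you in maintenance mode?"
fi
set +o pipefail
```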
The sending and receiving ends of a pipe must each be a process, typically represented by an executable command. An if statement is not a process. You can, of course, put such a statement into a process. For example,
echo a | (
    if true
    then
        cat
    fi )
causes cat to write a to stdout, because the parentheses put it into a child process.
UPDATE: As was pointed out in a comment, the explicit subprocess is not needed. One can also write:
echo a | if true
then
    cat
fi

Parse input string with pipes and redirects

I have a "pseudo term" wrapper written in bash. It takes input and sends it to a device with the "nc" command. I want it to handle pipes and redirects. This is what I have:
while read -ep "${1}> " CMD
do
    [[ ${CMD} =~ '|' ]] && SEP=1
    [[ ${CMD} =~ '>' ]] && SEP=2
    [[ -z ${SEP} ]] && echo "${CMD}"|nc -4u -w1 ${1} 65432 && continue
    CMD1=$(echo ${CMD}|awk -F '[|>]' '{print $1}')
    CMD2=$(echo ${CMD}|awk -F '[|>]' '{print $2}')
    [[ ${SEP} -eq 1 ]] && echo "${CMD1}"|nc -4u -w1 ${1} 65432 | ${CMD2}
    [[ ${SEP} -eq 2 ]] && echo "${CMD1}"|nc -4u -w1 ${1} 65432 > ${CMD2}
done
It checks the command variable to see if it contains a pipe or a redirect.
If it does not, it sends the command, as-is, to nc.
If there is a pipe or redirect, it breaks the command into two parts: the part before, and the part after, the separator.
It sends the first part to "nc", and feeds the output to the second part.
It works. But, it can handle only 1 pipe or redirect. I would like it to be able to handle an indeterminate number of pipes and redirects.
Thanks.
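One way to support an arbitrary number of pipes and redirects is to split the line only at the first separator and hand everything after it to the shell via eval, which then handles any further pipes or redirects itself. A hedged sketch, not your nc setup: `emit` is a hypothetical stand-in for `echo "$1" | nc -4u -w1 "$ip" 65432`, and eval executes arbitrary shell code, which is acceptable here only because the operator is the one typing the command:

```shell
# Sketch: split the line at the FIRST | or > only; bash itself (via eval)
# interprets the remainder, so any number of further pipes/redirects work.
emit() { printf 'device<%s>\n' "$1"; }   # hypothetical stand-in for nc

run_cmd() {
    local line=$1
    local first=${line%%[|>]*}           # text before the first | or >
    local rest=${line:${#first}}         # the separator and everything after
    case $rest in
        '')   emit "$first" ;;                        # no pipe/redirect at all
        '|'*) emit "$first" | eval "${rest#|}" ;;     # rest may hold more pipes
        '>'*) emit "$first" > $(echo ${rest#>}) ;;    # unquoted: trims spaces
    esac
}
```

For example, `run_cmd 'status | tr a-z A-Z | rev'` sends the text before the first pipe to the device and runs both filters on the reply.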

capture output using expect

I want to compare the number of files on a remote server and in my local directory. I ssh into the server, and I was able to capture the output of "ls somewhere/*something | wc -l" using $expect_out(buffer) and store it as a variable. My problem now is: how do I come back to my local computer, count the files here, and compare the two counts? After the comparison, I need to go back to the server and continue the job if the result is acceptable.
The easiest thing to do -- including from a correctness perspective -- is to not try to have a single long-running SSH session, but multiple shorter-lived ones (potentially using SSH multiplexing to reuse a single transport between such sessions):
count_remote_files() {
    local host=$1 dirname=$2 dirname_q
    printf -v dirname_q '%q' "$dirname"
    # The quoted 'EOF' matters: $1 and $# below must be expanded by the
    # remote bash (where $1 is the directory), not by the local shell.
    ssh "$host" "bash -s ${dirname_q}" <<'EOF'
cd "$1" || exit
set -- *
if [ "$#" -eq 1 ] && [ ! -e "$1" ] && [ ! -L "$1" ]; then
    echo "0"
else
    echo "$#"
fi
EOF
}
count_local_files() {
    local dirname=$1
    cd "$dirname" || return
    set -- *
    if [ "$#" -eq 1 ] && [ ! -e "$1" ] && [ ! -L "$1" ]; then
        echo "0"
    else
        echo "$#"
    fi
}
if (( $(count_remote_files "$host" "$remote_dir") ==
      $(count_local_files "$local_dir") )); then
    echo "File count is identical"
else
    echo "File count differs"
    ssh "$host" 'echo "doing something else now"'
fi
Since you are using Expect, you can easily count the local files by using Tcl commands, since Expect is built on top of Tcl:
set num_local_files [llength [glob somewhere/*something]]
For more info see http://nikit.tcl.tk/page/Tcl+Tutorial+Lesson+25

How can I make a chat script in bash?

I want to make a chat script in bash. I've started out pretty basic: you log in with any name you want (no password required), and then you can use commands like connect, clear, exit and so on.
But I want to be able to actually start a chat between two people in the terminal window. I've read a little about IRC, but I don't know how to get it to work.
Any help would be appreciated
Here's my script:
#!/bin/bash
ver=1.0
my_ip="127.0.0.1"

function connect()
{
    echo -n "Ip-address: "
    read ip
    if [ -n "$ip" ]
    then
        exit
    fi
}

function check()
{
    if [ "${c}" = "connect" ] || [ "${c}" = "open" ]
    then
        connect
    fi
    if [ "${c}" = "clear" ] || [ "${c}" = "cls" ]
    then
        clear
        new
    fi
    if [ "${c}" = "quit" ] || [ "${c}" = "exit" ]
    then
        echo "[Shutdown]"
        exit
    fi
}

function new()
{
    echo -n "$: "
    read c
    if [ -n "${c}" ]
    then
        check
    else
        echo "Invalid command!"
        new
    fi
}

function onLogin()
{
    clear
    echo "Logged in as ${l_name} on [${my_ip}]"
    new
}

function login()
{
    echo -n "Login: "
    read l_name
    if [ -n "${l_name}" ]
    then
        onLogin
    else
        echo "Invalid input!"
        login
    fi
}

#execution
clear
echo "Bash Chat v${ver} | Mac"
login
If you truly want to write a chat client in pure bash, it would have to be local chat (same physical machine) rather than network chat.
Assuming that this is adequate for your needs, you can use named pipes (FIFO)
Illustration
Here is an example that illustrates what you can do with two pipes (for bidirectional communication):
mkfifo /tmp/chatpipe1 ; mkfifo /tmp/chatpipe2
(In terminal one): cat > /tmp/chatpipe1
(In terminal two): cat /tmp/chatpipe2
(The same, in reverse, for terminals 3 and 4.)
This illustrates that you can have 4 processes in bash, two writing to two pipes and two reading from the same two pipes. Two terminals on the left are for Bob, two on the right are for John.
You can organize all of this into a single script if you understand bash backgrounding, loops (and hopefully traps to clean up on shutdown).
Script
Here is a rudimentary version:
#!/bin/bash
if [ -z "$2" ] ; then
    echo "Need names of chat pipes (yours and other's), eg $0 bob john"
    exit 1
fi

P1=/tmp/chatpipe${1}
P2=/tmp/chatpipe${2}
[ -p "$P1" ] || mkfifo $P1
[ -p "$P2" ] || mkfifo $P2

# Kill all background jobs (the incoming cat) on exit
# (Probably should delete the fifo files too)
trap 'kill -9 $(jobs -p)' EXIT

# Background cat of incoming pipe
# (also prepend current date to each line)
(cat $P2 | sed "s/^/$(date +%H:%M:%S)> /" ) &

# Feed one notice and then STDIN to outgoing pipe
(echo "$1 joined" ; cat) >> $P1
Note that this script is only a simple example. If bob and john are different unix accounts, you'll have to be more careful with file permissions (or if you don't care about security, mkfifo -m 777 ... is an option)
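The FIFO cleanup hinted at in the comments can be folded into the same EXIT trap. A sketch (the `mktemp -u` names are just placeholders for this demo; in the script above you would reuse its $P1/$P2):

```shell
# Sketch: one EXIT trap that reaps the background reader and removes
# the FIFOs. mktemp -u only generates unused names for the demo.
P1=$(mktemp -u /tmp/chatpipe.XXXXXX); mkfifo "$P1"
P2=$(mktemp -u /tmp/chatpipe.XXXXXX); mkfifo "$P2"
cleanup() {
    kill $(jobs -p) 2>/dev/null || true   # stop the background cat, if any
    rm -f "$P1" "$P2"                     # and remove the named pipes
}
trap cleanup EXIT
```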
A chatroom can be made in literally 10 lines of bash. I posted one on my github earlier today
https://github.com/ErezBinyamin/webCatChat/blob/master/minimalWbserver
Can handle "n" users. They just need to connect to :1234
Here's the code:
#!/bin/bash
mkdir -p /tmp/webCat && printf "HTTP/1.1 200 OK\n\n<!doctype html><h2>Erez's netcat chat server!!</h2><form>Username:<br><input type=\"text\" name=\"username\"><br>Message:<br><input type=\"text\" name=\"message\"><div><button>Send data</button></div><button http-equiv=\"refresh\" content=\"0; url=129.21.194:1234\">Refresh</button></form>" > /tmp/webCat/webpage
while [ 1 ]
do
    [[ $(head -1 /tmp/webCat/r) =~ "GET /?username" ]] && USER=$(head -1 /tmp/webCat/r | sed 's#.*username=##' | sed 's#&message.*##') && MSG=$(head -1 /tmp/webCat/r | sed 's#.*message=##' | sed 's#HTTP.*##')
    [ ${#USER} -gt 1 ] && [ ${#MSG} -gt 1 ] && [ ${#USER} -lt 30 ] && [ ${#MSG} -lt 280 ] && printf "\n%s\t%s\n" "$USER" "$MSG" && printf "<h1>%s\t%s" "$USER" "$MSG" >> /tmp/webCat/webpage
    cat /tmp/webCat/webpage | timeout 1 nc -l 1234 > /tmp/webCat/r
    unset USER && unset MSG
done

Distinguish different type of input (BASH)

I have a script that must accept input both from a file named as its first argument and from stdin. If there are more or fewer than one argument, it should reject the invocation.
The goal I'm trying to accomplish is to accept input in either of these formats:
./myscript myfile
AND
./myscript < myfile
What I have so far is
if [ "$#" -eq 1 ]; then          # check argument
    if [ -t 0 ]; then            # check whether input is from the keyboard (read from github)
        VAR=${1:-/dev/stdin}     # get value to VAR
        # then do stuff here!!
    else                         # if input is not from the keyboard
        VAR=$1
        if [ ! -f "$VAR" ]; then # check whether the file is readable
            echo "ERROR!"
        else
            # do stuff here!!!
        fi
    fi
fi
The PROBLEM is when I tried to say
./myscript < myfile
it prints
ERROR!
I don't know whether this is the correct way to do this; I'd really appreciate suggestions or the correct code for my problem. Thank you.
#!/bin/bash
# if nothing was passed on the command line, pass "/dev/stdin" to myself
# so all the code below can be made branch-free
[[ ${#} -gt 0 ]] || set -- /dev/stdin
# loop through the command line arguments, treating them as file names
for f in "$@"; do
    echo "$f"
    [[ -r $f ]] && while IFS= read -r line; do echo 'echo:' "$line"; done < "$f"
done
Examples:
$ args.sh < input.txt
$ args.sh input.txt
$ cat input.txt | args.sh
