Running a for loop as another user - bash

I am trying to execute a command as another user. Here is my code:
sudo -i -u someuser bash -c 'for i in 1 2 3; do echo $i; done'
I am expecting the output 1 2 3, just executed as someuser, but the code above prints blank lines. I tried adding some other commands:
sudo -i -u someuser bash -c 'for i in 1 2 3; do ls; done'
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
If I run the loop as the current user, it gives the expected output:
for i in 1 2 3; do echo $i; done
1
2
3
It looks like bash is unable to resolve the variable $i inside the for loop. I tried the escape character \ but it did not help.

TL;DR: Don't use sudo -i with bash -c
The usual way to use sudo -i is without any arguments, in which case it simply starts an interactive login shell.
If you really must have a login shell for some reason (which isn't good practice for running scripts), it's much saner to simply add the extra arguments needed to make your shell a login shell to the bash command itself, and keep sudo out of the business of changing the arguments you pass it:
sudo -u someuser bash -lic 'for i in 1 2 3; do echo "$i"; done'
...or...
sudo -u someuser -i <<'EOF'
for i in 1 2 3; do echo "$i"; done
EOF
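Either way, you can check that the loop really runs as the target user by printing the output of whoami next to the loop variable, for example:
sudo -u someuser bash -c 'for i in 1 2 3; do echo "$i $(whoami)"; done'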
The Gory Details
When you use sudo -i with arguments, sudo rewrites the argument list it was given, concatenating the arguments into a single command string placed after a -c, so you end up with something like {"sh", "-c", "bash -c ..."}. While concatenating the arguments, sudo uses the parse_args logic for MODE_LOGIN_SHELL, which adds an escape character before every character that is not alphanumeric, _, - or $; keeping $ out of that escape list was introduced in commitish 6484574f, tagged as a fix for bug #564 (which was in turn introduced by the fix to bug #413 -- personally, I think we would all be better off if bug 413 had been left in place rather than making any attempt to fix it). Because the $ is left unescaped, the intermediate login shell expands $i -- which is unset there, hence empty -- before your inner bash ever sees the command.
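To make that concrete, what the target user's login shell ends up running is roughly this (illustrative only, not the exact string sudo builds); running it yourself reproduces the blank lines, because the outer sh expands the unescaped $i before the inner bash sees it:
sh -c 'bash -c for\ i\ in\ 1\ 2\ 3\;\ do\ echo\ $i\;\ done'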
See also sh -c does not expand positional parameters if I run it from sudo --login over at Unix & Linux Stack Exchange.
Since this behavior was deliberately put in place in 2013, I doubt there's any fixing it at this point -- any change to sudo's escaping behavior has the potential to modify the security properties of existing scripts.


Create and write systemd service from Shell script Failed [duplicate]

This question already has answers here:
How do I use sudo to redirect output to a location I don't have permission to write to? [closed]
sudo cat << EOF > File doesn't work, sudo su does [duplicate]
I am trying to automate the addition of a repository source in my Arch's pacman.conf file using the echo command in my shell script. However, it fails like this:
sudo echo "[archlinuxfr]" >> /etc/pacman.conf
sudo echo "Server = http://repo.archlinux.fr/\$arch" >> /etc/pacman.conf
sudo echo " " >> /etc/pacman.conf
-bash: /etc/pacman.conf: Permission denied
If I make changes to /etc/pacman.conf manually using vim, by doing
sudo vim /etc/pacman.conf
and quitting vim with :wq, everything works fine and my pacman.conf has been manually updated without "Permission denied" complaints.
Why is this so? And how do I get sudo echo to work? (btw, I tried using sudo cat too but that failed with Permission denied as well)
As @geekosaur explained, the shell does the redirection before running the command. When you type this:
sudo foo >/some/file
Your current shell process makes a copy of itself that first tries to open /some/file for writing, then if that succeeds it makes that file descriptor its standard output, and only if that succeeds does it execute sudo. This is failing at the first step.
If you're allowed (sudoer configs often preclude running shells), you can do something like this:
sudo bash -c 'foo >/some/file'
But I find a good solution in general is to use | sudo tee instead of > and | sudo tee -a instead of >>. That's especially useful if the redirection is the only reason I need sudo in the first place; after all, needlessly running processes as root is precisely what sudo was created to avoid. And running echo as root is just silly.
echo '[archlinuxfr]' | sudo tee -a /etc/pacman.conf >/dev/null
echo 'Server = http://repo.archlinux.fr/$arch' | sudo tee -a /etc/pacman.conf >/dev/null
echo ' ' | sudo tee -a /etc/pacman.conf >/dev/null
I added > /dev/null on the end because tee sends its output to both the named file and its own standard output, and I don't need to see it on my terminal. (The tee command acts like a "T" connector in a physical pipeline, which is where it gets its name.) And I switched to single quotes ('...') instead of doubles ("...") so that everything is literal and I didn't have to put a backslash in front of the $ in $arch. (Without the quotes or backslash, $arch would get replaced by the value of the shell parameter arch, which probably doesn't exist, in which case the $arch is replaced by nothing and just vanishes.)
So that takes care of writing to files as root using sudo. Now for a lengthy digression on ways to output newline-containing text in a shell script. :)
To BLUF it, as they say, my preferred solution would be to just feed a here-document into the above sudo tee command; then there is no need for cat or echo or printf or any other commands at all. The single quotation marks have moved to the sentinel introduction <<'EOF', but they have the same effect there: the body is treated as literal text, so $arch is left alone:
sudo tee -a /etc/pacman.conf >/dev/null <<'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
But while that's how I'd do it, there are alternatives. Here are a few:
You can stick with one echo per line, but group all of them together in a subshell, so you only have to append to the file once:
(echo '[archlinuxfr]'
echo 'Server = http://repo.archlinux.fr/$arch'
echo ' ') | sudo tee -a /etc/pacman.conf >/dev/null
If you add -e to the echo (and you're using a shell that supports that non-POSIX extension), you can embed newlines directly into the string using \n:
# NON-POSIX - NOT RECOMMENDED
echo -e '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But as it says above, that's not POSIX-specified behavior; your shell might just echo a literal -e followed by a string with a bunch of literal \ns instead. The POSIX way of doing that is to use printf instead of echo; it automatically treats its argument like echo -e does, but doesn't automatically append a newline at the end, so you have to stick an extra \n there, too:
printf '[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n \n' |
sudo tee -a /etc/pacman.conf >/dev/null
With either of those solutions, what the command gets as an argument string contains the two-character sequence \n, and it's up to the command program itself (the code inside printf or echo) to translate that into a newline. In many modern shells, you have the option of using ANSI quotes $'...', which will translate sequences like \n into literal newlines before the command program ever sees the string. That means such strings work with any command whatsoever, including plain old -e-less echo:
echo $'[archlinuxfr]\nServer = http://repo.archlinux.fr/$arch\n ' |
sudo tee -a /etc/pacman.conf >/dev/null
But, while more portable than echo -e, ANSI quotes are still a non-POSIX extension.
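For completeness, printf can also print one line per argument when given the '%s\n' format, which sidesteps embedded \n sequences entirely (this is just a variant of the printf approach above):
printf '%s\n' '[archlinuxfr]' 'Server = http://repo.archlinux.fr/$arch' ' ' |
sudo tee -a /etc/pacman.conf >/dev/null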
And again, while those are all options, I prefer the straight tee <<EOF solution above.
The problem is that the redirection is being processed by your original shell, not by sudo. Shells are not capable of reading minds and do not know that that particular >> is meant for the sudo and not for it.
You need to:
quote the redirection (so it is passed on to sudo)
and use sudo -s (so that sudo uses a shell to process the quoted redirection).
http://www.innovationsts.com/blog/?p=2758
As the instructions above are not that clear, I am using the instructions from that blog post, with examples so it is easier to see what you need to do.
$ sudo cat /root/example.txt | gzip > /root/example.gz
-bash: /root/example.gz: Permission denied
Notice that it’s the second command (the gzip command) in the pipeline that causes the error. That’s where our technique of using bash with the -c option comes in.
$ sudo bash -c 'cat /root/example.txt | gzip > /root/example.gz'
$ sudo ls /root/example.gz
/root/example.gz
We can see from the ls command's output that the compressed file creation succeeded.
The second method is similar to the first in that we’re passing a command string to bash, but we’re doing it in a pipeline via sudo.
$ sudo rm /root/example.gz
$ echo "cat /root/example.txt | gzip > /root/example.gz" | sudo bash
$ sudo ls /root/example.gz
/root/example.gz
sudo bash -c 'echo "[archlinuxfr]" >> /etc/pacman.conf'
STEP 1: create a function in a bash file (write_pacman.sh)
#!/bin/bash
function write_pacman {
sudo tee -a /etc/pacman.conf > /dev/null << 'EOF'
[archlinuxfr]
Server = http://repo.archlinux.fr/$arch
EOF
}
Quoting the heredoc delimiter ('EOF') prevents the $arch variable from being expanded.
STEP 2: source the bash file
$ source write_pacman.sh
STEP 3: execute the function
$ write_pacman
append files (sudo cat):
cat <origin-file> | sudo tee -a <target-file>
append echo to file (sudo echo):
echo <origin> | sudo tee -a <target-file>
(EXTRA) disregard the output:
echo <origin> | sudo tee -a <target-file> >/dev/null

Allow user input in second command in bash pipe

I'm looking for a way to allow user input in the second command of a bash pipeline, and I'm not sure how to go about it. I'd like to be able to provide a one-liner for someone to install my application, but part of that install process requires asking some questions.
The current script setup looks like:
curl <url/to/bootstrap.sh> | bash
and then bootstrap.sh does:
if [ $UID -ne 0 ]; then
echo "This script requires root to run. Restarting the script under root."
exec sudo "$0" "$@"
exit $?
fi
git clone <url_to_repo> /usr/local/repo/
bash /usr/local/repo/.setup/install_system.sh
which in turn calls a python3 script that asks for input.
I know that the curl in the first line is using stdin, so that might make what I'm asking impossible, and it may have to be two lines to ever work:
wget <url/to/bootstrap.sh>
bash bootstrap.sh
You can restructure your script to run this way:
bash -c "$(curl -s http://0.0.0.0//test.bash 2>/dev/null)"
foo
wololo:a
a
My test.bash is really just
#!/bin/bash
echo foo
python -c 'x = raw_input("wololo:");print(x)'
This demonstrates that stdin can still be read from in this way. Sure, it creates a subshell to take care of curl, but it allows you to keep reading from stdin as well.
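If you really want to keep the one-line curl | bash form, a common workaround (not part of the answer above, just a sketch; the prompt text is only illustrative) is to have the script read its interactive answers from the controlling terminal rather than from stdin, so they don't compete with the script text arriving on the pipe:
# inside the script that needs to ask a question
read -r -p "Install path: " install_path < /dev/tty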

Bash: Execute command WITH ARGUMENTS in new terminal [duplicate]

This question already has answers here:
how do i start commands in new terminals in BASH script
So I want to open a new terminal in bash and execute a command with arguments.
As long as I use something like ls as the command it works fine, but when I use something like route -n, i.e. a command with arguments, it doesn't work.
The code:
gnome-terminal --window-with-profile=Bash -e whoami #WORKS
gnome-terminal --window-with-profile=Bash -e route -n #DOESN'T WORK
I already tried putting "" around the command and all that, but it still doesn't work.
You can start a new terminal with a command using the following:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>"
To continue the terminal with the normal bash profile, add exec bash:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>; exec bash"
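With the command from the question, for example, that becomes:
gnome-terminal --window-with-profile=Bash -- \
bash -c "route -n; exec bash"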
Here's how to create a Here document and pass it as the command:
cmd="$(printf '%s\n' 'wc -w <<-EOF
First line of Here document.
Second line.
The output of this command will be '15'.
EOF' 'exec bash')"
xterm -e bash -c "${cmd}"
To open a new terminal and run an initial command with a script, add the following in a script:
nohup xterm -e bash -c "$(printf '%s\nexec bash' "$*")" &>/dev/null &
When $* is quoted, it expands the arguments to a single word, with each separated by the first character of IFS. nohup and &>/dev/null & are used only to allow the terminal to run in the background.
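For instance, if that line is saved in a script named, say, new-term.sh (the name is hypothetical), you would run it as:
./new-term.sh route -n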
Try this:
gnome-terminal --window-with-profile=Bash -e 'bash -c "route -n; read"'
The final read prevents the window from closing after execution of the previous commands. It will close when you press a key.
If you want to experience headaches, you can try with more quote nesting:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "route -n; read -p '"'Press a key...'"'"'
(In the following examples there is no final read. Let’s suppose we fixed that in the profile.)
If you want to print an empty line and enjoy multi-level escaping too:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "printf \\\\n; route -n"'
The same, with another quoting style:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c '\''printf "\n"; route -n'\'
Variables are expanded in double quotes, not single quotes, so if you want them expanded you need to ensure that the outermost quotes are double:
command='printf "\n"; route -n'
gnome-terminal --window-with-profile=Bash \
-e "bash -c '$command'"
Quoting can become really complex. When you need something more advanced than a simple couple of commands, it is advisable to write an independent shell script with all the readable, parametrized code you need, save it somewhere, say /home/user/bin/mycommand, and then invoke it simply as
gnome-terminal --window-with-profile=Bash -e /home/user/bin/mycommand
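Such a script might be no more than this (a sketch; the path and the route -n command are just the examples used above):
#!/bin/bash
# /home/user/bin/mycommand -- run the command, then keep the window open
printf '\n'
route -n
read -r -p 'Press a key...'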

What does bash -s do?

I'm new to bash and trying to understand what the script below is doing. I know -e means exit on error, but I'm not sure what -s does, or what the $delimiter is for.
$delimiter = 'EOF-MY-APP';
$process = new SSH(
"ssh $target 'bash -se' << \\$delimiter".PHP_EOL
.'set -e'.PHP_EOL
.$command.PHP_EOL
.$delimiter
);
The -s option is usually used along with the curl $script_url | bash pattern. For example,
curl -L https://chef.io/chef/install.sh | sudo bash -s -- -P chefdk
-s makes bash read commands (the "install.sh" code as downloaded by "curl") from stdin, and accept positional parameters nonetheless.
-- lets bash treat everything which follows as positional parameters instead of options.
bash will set the variables $1 and $2 of the "install.sh" code to -P and to chefdk, respectively.
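A minimal, self-contained way to see this behaviour (no curl involved) is to pipe a one-line script into bash -s and hand it arguments after --:
echo 'echo "first: $1, second: $2"' | bash -s -- -P chefdk
# prints: first: -P, second: chefdk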
Reference: https://www.experts-exchange.com/questions/28671064/what-is-the-role-of-bash-s.html
From man bash:
-s If the -s option is present, or if no arguments remain after
option processing, then commands are read from the standard
input. This option allows the positional parameters to be
set when invoking an interactive shell.
From help set:
-e Exit immediately if a command exits with a non-zero status.
So, this tells bash to read the script to execute from Standard Input, and to exit immediately if any command in the script (from stdin) fails.
The delimiter is used to mark the start and end of the script. This is called a Here Document or a heredoc.
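Assembled, the PHP snippet above therefore produces a shell command of roughly this shape (with the target host and the command substituted in by PHP); the backslash before the delimiter quotes it, so the heredoc body reaches the remote bash unexpanded:
ssh <target> 'bash -se' << \EOF-MY-APP
set -e
<command>
EOF-MY-APP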

Trouble escaping quotes in a variable held string during a Sub-shell execution call [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable the output looks fine:
echo ${COMMAND}
screen output:
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it is better to use a BASH array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$("${cmd[@]}" 2>&1)
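Filled in with the values visible in the question's echo output, that would look something like this (values assumed from the question):
DB_NAME='drupal7'
SQL='SELECT DISTINCT hostname FROM accesslog;'
EXPORT_FILE='/DRUPAL/INTERFACES/EXPORTS/ip_list.dat'
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$("${cmd[@]}" 2>&1)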
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
# optional, if variables are already defined in global scope
DB_NAME="$1"
SQL="$2"
EXPORT_FILE="$3"
psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
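Combining the two ideas, an argument-count check could look like this (just a sketch):
call_psql() {
  (( $# == 3 )) || { echo 'usage: call_psql db_name sql export_file' >&2; return 1; }
  psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1
}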
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print each command, with its variables expanded, as the function (or any other command) is called. You can switch this behaviour off using set +x, or if you want it on for the whole duration of the script you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.
