The gcloud init command doesn't offer the login prompt during bash script execution.
But it offered the login after I manually typed the exit command once the script had ended.
vagrant@vagrant-ubuntu-trusty-64:~$ exit
logout
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
Your active configuration is: [default]
Pick configuration to use:
[1] Re-initialize this configuration [default] with new settings
[2] Create a new configuration
Please enter your numeric choice: 1
Your current configuration has been set to: [default]
To continue, you must log in. Would you like to log in (Y/n)?
My bash script:
#!/usr/bin/env bash
OS=`cat /proc/version`
function setupGCE() {
curl https://sdk.cloud.google.com | bash
`exec -l $SHELL`
`gcloud init --console-only`
`chown -R $USER:$USER ~/`
}
if [[ $OS == *"Ubuntu"* || $OS == *"Debian"* ]]
then
sudo apt-get -y install build-essential python-pip python-dev curl
sudo pip install apache-libcloud
setupGCE
fi
How can I get the login prompt during the bash script execution?
There are a number of issues with the posted snippet.
The correct snippet is (probably):
function setupGCE() {
curl https://sdk.cloud.google.com | bash
gcloud init --console-only
chown -R $USER:$USER ~/
}
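If the problem is that the freshly installed gcloud isn't on the PATH yet within the same script run, a hedged alternative to spawning a new shell is to source what the installer appended (the paths below assume the default install location and are not from the question):
source ~/.bashrc                           # picks up whatever the SDK installer appended
source ~/google-cloud-sdk/path.bash.inc    # or source the SDK's own path file directly (assumed default location)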
The first error with the original, which you discovered yourself (the what of it, at least, if not the why), is that exec -l $SHELL is blocking progress. It does that because you've run an interactive shell that is now waiting on you for input, and the function is waiting for that process to exit before continuing.
Additionally, exec replaces the current process with the spawned process. You got lucky here, actually. Had you not wrapped the call to exec in backticks, your function would have exited the shell script entirely when you exited the $SHELL it launched. As it is, however, exec just replaced the sub-shell that the backticks added, and so you were left with a child process that could safely exit and return you to the parent/main script.
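A minimal illustration of both points (the commands here are harmless stand-ins, not part of the original script):
bash -c 'echo before; exec sleep 0; echo after'   # prints only "before": exec replaced the shell, so "after" never runs
echo "got: `exec echo just-the-subshell`"         # exec only replaces the backtick subshell; the outer script survives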
The second issue is that backticks run the command they surround and then replace themselves with the output. This is why
echo "bar `echo foo` baz"
outputs bar foo baz, etc. (Run set -x before running that to see what commands are actually being run.) So when you write
`gcloud init --console-only`
what you are saying is "run gcloud init --console-only, then take its output and replace the command line with that", which will then attempt to run the output as a command itself (which is likely not what you wanted). Similarly for the other lines.
This happens not to have been problematic here, though, as chown and likely gcloud init don't print anything to stdout, and so the resulting command line is empty.
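To make that concrete, here is a hedged demonstration with harmless stand-in commands:
`echo ls`       # substitution yields "ls", which the shell then executes as a command
`echo hello`    # yields "hello" -> "hello: command not found"
`true`          # produces no output, so the resulting command line is empty and nothing happens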
Somehow the exec -l $SHELL was causing all the mess. I changed it to source ~/.bashrc and now it works.
Related
I have a script that will be executed as root; part way through the script I would like to switch to a user (say, bob) and execute another script using that user's environment. At the end of the script I want to switch back to root and execute more commands. I would like to run this script without having to enter the password for bob.
This script will be provided to my AWS EC2 instance via the user-data feature at first boot.
I thought the way to do this was to use either sudo or su. However, I don't appear to have access to bob's environment with either of these methods.
In the stdout echoed below, you'll see that the environment variable myvar is initialized to Inara, but when this script is executed with sudo, that value is unset:
dave@bugbear:~/workspaces/sandbox$ su --login bob
Password:
bob@bugbear:~$ cat bin/echo.sh
#!/bin/bash
echo "In echo.sh.. myvar is {$myvar}"
echo "Now executing the ruby script"
. ~/.bashrc
~/bin/echo.rb
bob@bugbear:~$ cat bin/echo.rb
#!/usr/bin/env ruby
puts "$myvar is: #{ENV['myvar']}"
bob@bugbear:~$ bin/echo.sh
In echo.sh.. myvar is {Inara}
Now executing the ruby script
$myvar is: Inara
bob@bugbear:~$ exit
logout
dave@bugbear:~/workspaces/sandbox$ cat test.sh
#!/bin/bash
stty echo
sudo --login -u bob bin/echo.sh
dave@bugbear:~/workspaces/sandbox$ ./test.sh
In echo.sh.. myvar is {}
Now executing the ruby script
$myvar is:
You are probably looking for one of these:
Simulate an initial login environment (-i) for user bob (-u):
sudo -i -u bob [command]
Or, use sudo to gain the privilege required to run su, and ask su to start a login shell as bob with - (without the bare - you're not doing that) and to run a command with -c:
sudo su - bob -c [command]
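For the example from the question (bob and bin/echo.sh are the names used above), either form would look like this; whether myvar survives still depends on bob exporting it from a file a login shell actually reads (e.g. ~/.profile or ~/.bashrc):
sudo -i -u bob bin/echo.sh          # -i starts in bob's home directory, so the relative path works
sudo su - bob -c 'bin/echo.sh'      # same idea, going through su's login shell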
I have such code:
echo 'test'
usr=$USER
sudo sh -c "exec su $usr"
echo 'test1'
For example, echo 'test' stands for code where I configure something that requires me to re-login in a new shell. But then, where the code says echo 'test1', I need to continue configuring using the reloaded new shell.
Is there a way to do that automatically? Like starting a new shell in parallel or something like that?
Ubuntu 14.04, bash.
Update:
For example, I need to install virsh, but after installation it requires sudo to run. My script configures groups, adding $USER to the libvirtd group. Then I need to re-login. I can do that with sudo sh -c "exec su $usr". After that I need the script to continue execution. Is there a way to do that?
Change your sudoers configuration with visudo so that you are no longer asked for a password. See man visudo and man sudoers.
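For example, an entry like the following would do it (the username dave is taken from the prompt above and is only illustrative); edit the file exclusively through sudo visudo so a syntax error can't lock you out:
dave ALL=(ALL) NOPASSWD: ALL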
I suggest using a heredoc:
echo 'test'
usr="$USER"
sudo su - "$USER" << EOF
echo 'test1'
EOF
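Tied to the virsh example from the question, the same pattern could look roughly like this (the libvirtd group name comes from the question; this is a sketch, not something tested on your setup):
sudo usermod -aG libvirtd "$USER"   # add the current user to the group
sudo su - "$USER" <<'EOF'
id                                  # the fresh login shell already shows the new group
virsh list --all                    # so virsh can talk to libvirtd without sudo
EOF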
I have a bash script that partially needs to run with default user rights, but there are some parts that involve using sudo (like copying stuff into system folders). I could just run the script with sudo ./script.sh, but that messes up all the file access rights if the script creates or modifies files.
So, how can I run the script using sudo only for some commands? Is it possible to ask for the sudo password at the beginning (when the script starts) but still run some lines of the script as the current user?
You could add this to the top of your script:
while ! echo "$PW" | sudo -S -v > /dev/null 2>&1; do
read -s -p "password: " PW
echo
done
That ensures the sudo credentials are cached for 5 minutes. Then you could run the commands that need sudo, and just those, with sudo in front.
Edit: Incorporating mklement0's suggestion from the comments, you can shorten this to:
sudo -v || exit
The original version, which I adapted from a Python snippet I have, might be useful if you want more control over the prompt or the retry logic/limit, but this shorter one is probably what works well for most cases.
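A minimal sketch of how this fits into a script (the build, copy, and test commands are placeholders, not from the question):
#!/bin/bash
sudo -v || exit 1                   # ask for the password once up front; credentials are cached
make                                # runs as the invoking user, so build artifacts keep your ownership
sudo cp myprog /usr/local/bin/      # only this line runs as root
./run-tests.sh                      # back to the normal user again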
Each line of your script is a command line. So, for the lines you want, you can simply put sudo in front of those lines of your script. For example:
#!/bin/sh
ls *.h
sudo cp *.h /usr/include/
echo "done" >>log
Obviously I'm just making stuff up. But, this shows that you can use sudo selectively as part of your script.
Just like using sudo interactively, you will be prompted for your user password if you haven't done so recently.
I have a shell script which needs a non-root user account to run certain commands and then needs to change the user to root to run the rest of the script. I am using SUSE 11.
I have used expect to automate the password prompt. But when I use
spawn su -
and the command gets executed, the prompt comes back with root and the rest of the script does not execute.
Eg.
< non-root commands>
spawn su -
<root commands>
But after su -, the prompt comes back with the user as root.
How can I execute the remainder of the script?
The sudo -S option does not help, as it does not run the sudo -S ifconfig command, which I need to find the IP address of the machine.
I have already gone through these links but could not find a solution:
Change script directory to user's homedir in a shell script
Changing unix user in a shell script
sudo will work here but you need to change your script a little bit:
$ cat 1.sh
id
sudo -s <<EOF
echo Now i am root
id
echo "yes!"
EOF
$ bash 1.sh
uid=1000(igor) gid=1000(igor) groups=1000(igor),29(audio),44(video),124(fuse)
Now i am root
uid=0(root) gid=0(root) groups=0(root)
yes!
You need to run your commands in a <<EOF block and give the block to sudo.
If you want, you can use su, of course. But you need to run it using expect/pexpect, which will enter the password for you.
But even if you could manage to enter the password automatically (or switch it off), this construction would not work:
user-command
su
root-command
In this case root-command will be executed with user privileges, not root privileges, because it runs after su has finished (su opens a new shell; it does not change the uid of the current shell). You can use the same trick here, of course:
su -c 'sh -s' <<EOF
# list of root commands
EOF
But now you have the same thing as you would with sudo.
There is an easy way to do it without a second script. Just put this at the start of your file:
if [ "$(whoami)" != "root" ]
then
sudo su -s "$0"
exit
fi
Then it will automatically run itself as root. Of course, this assumes that you can sudo su without having to provide a password - but that's out of scope of this answer; see one of the other questions about using sudo in shell scripts for how to do that.
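A commonly seen variation (my own addition, not part of the answer above) re-runs the script through sudo directly and preserves its arguments:
if [ "$(id -u)" -ne 0 ]; then
    exec sudo "$0" "$@"             # replace this process with a root re-run of the same script, same arguments
fi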
Short version: create a block to enclose all commands to be run as root.
For example, I created a script to run a command from a root subdirectory; the segment goes like this:
sudo su - <<EOF
cd rootSubFolder/subfolder
./commandtoRun
EOF
Also, note that if you are changing to the "root" user inside a shell script like the one below, a few Linux utilities (awk for data extraction, or even defining a simple shell variable, etc.) will behave weirdly.
To resolve this, simply quote the whole here-document by using <<'EOF' in place of <<EOF.
sudo -i <<'EOF'
ls
echo "I am root now"
EOF
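A quick way to see the difference (just a sketch; whoami is only there to show who expands what):
sudo -i <<EOF
echo "expanded by the calling shell: $(whoami)"   # prints your own username
EOF
sudo -i <<'EOF'
echo "expanded by the root shell: $(whoami)"      # prints root
EOF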
The easiest way to do that would be to create at least two scripts.
The first one should call the second one with root privileges. So every command you execute in the second script would be executed as root.
For example:
runasroot.sh
sudo su -c './scriptname.sh'
scriptname.sh
apt-get install mysql-server-5.5
or whatever you need.
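Hedged usage sketch, assuming both files are in the current directory:
chmod +x runasroot.sh scriptname.sh
./runasroot.sh          # prompts for your sudo password, then runs scriptname.sh as root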
I'm trying to start unicorn_rails in a ruby script, and after executing many commands in the script, when the script gets to the following line
%x[bash -ic "bash <(. ~/.bashrc); cd /home/www-data/rails_app; bundle exec unicorn_rails -p 8000 -E production -c /home/www-data/rails_app/config/unicorn.rb -D"]
the script stops, generating the following output
[1]+ Stopped ./setup_rails.rb
and returns to the Linux prompt. If I type "fg", the script finishes running, the line where the script had stopped gets executed and unicorn gets started as a daemon.
If I run the line in a separate script, the script completes without stopping.
UPDATE_1 -
I source .bashrc because earlier in the script I install rvm and to get it to run with the correct environment I have the following:
%x[echo "[[ -s \"$HOME/.rvm/scripts/rvm\" ]] && source \"$HOME/.rvm/scripts/rvm\"" >> .bashrc]
%x[bash -ic "bash <(. ~/.bashrc); rvm install ruby-1.9.2-p290; rvm 1.9.2-p290 --default;"]
So if I want to run the correct versions of rvm, ruby and bundle, I need to source .bashrc.
end UPDATE_1
Does anyone have any idea what could cause a ruby script to halt as if control-Z was pressed?
Not sure why it's stopping, but my general rule of thumb is to never source my .bashrc in a script -- that might be the source of your problem right there, but I can't be sure without seeing what's in it. You should be able to change your script to something like:
$ vi setup_rails.sh
#!/usr/bin/bash
# EDIT from comments below
# expanding from a one liner to a better script...
RVM_PATH=$HOME/.rvm/scripts
# install 1.9.2-p290 unless it's installed
$RVM_PATH/rvm info 1.9.2-p290 >/dev/null 2>&1 || $RVM_PATH/rvm install 1.9.2-p290
# run startup command inside rvm shell
$RVM_PATH/rvm-shell 1.9.2-p290 -c "cd /home/www-data/rails_app && bundle exec unicorn_rails -p 8000 -E production -c /home/www-data/rails_app/config/unicorn.rb -D"
This should give you the same result.