gcloud compute ssh --command flag not working - bash

I am using Google's gcloud compute ssh command from Compute Engine's vm1 to connect to another project's vm2. The problem occurs when I try to connect with the --command flag: the shell command is not executed, but the SSH connection is established. However, I can see the bash command among vm2's processes as pid=xxxx 'bash -c sudo su && service nginx stop && source /home/x/bin/activate && python example.py'
When I terminate the ssh command from vm1, the bash command immediately starts on vm2. I could not figure out what causes this problem.
sudo gcloud compute ssh --project=project_name vm_name --zone=zone --command='sudo su && service nginx stop && source /home/x/bin/activate && python example.py'
OS: Ubuntu 18.04

That command set won't work. You're approaching it as if you were the one running the commands inside a terminal, in which case:
sudo su (would get you a root shell and all subsequent commands would run as root)
service nginx stop (you're root)
source /home/x/bin/activate (you're root)
python example.py (you're root)
When you chain your commands with &&, each one runs only after the previous one succeeds, and all of the commands are actually run as you:
sudo su (run as you; when this exits successfully, the next command is triggered)
service nginx stop (as you)
etc (as you)
So what ends up happening is that you get a root shell and then nothing. Unless that shell exits (cleanly), the next command in the chain never runs, so you're waiting, because the root shell is also waiting. As @DazWilkin mentioned above, what you should actually do is remove the sudo su (you don't need a root shell; you can't do anything in it anyway) and prefix the other commands with sudo instead, so that each one runs with elevated permissions.
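For example, a minimal sketch of the corrected command (assuming, as in the question, that only the nginx stop needs elevation; the virtualenv and script run as your own user):
gcloud compute ssh vm_name --project=project_name --zone=zone --command='sudo service nginx stop && source /home/x/bin/activate && python example.py'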

Related

Why does a Jenkins shell script hang when I run sudo pm2 ls?

I confess I am a total newbie to Jenkins.
I have Jenkins-tls installed on my Mac for experimentation.
I have a remote server that I'm testing with.
My Jenkins script is ultra simple.
ssh to the remote machine
sudo pm2 ls
The last command just hangs.
I run the same 2 commands from the command line and it all works perfectly.
FYI, I need sudo for pm2 since I need to be root to run it; without sudo, I get access denied.
Any thoughts?
I believe you are making the invalid assumption that Jenkins somehow "types" commands into the remote session's command shell after starting ssh. That is not what happens. Instead, Jenkins waits for the ssh command to finish, and only then executes the next command, sudo pm2 ls. That never happens here, because the ssh session never terminates. You observe this as a "hang".
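In other words, the two-line script is treated as two independent steps, something like this (a hedged reconstruction, with user@remote standing in for the real host):
ssh user@remote   # step 1: blocks until the remote session ends
sudo pm2 ls       # step 2: only runs afterwards, locally on the Jenkins machine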
How to solve this?
If there's only a small number of commands, you can use ssh to run them directly:
ssh user@remote sudo pm2 ls
ssh user@remote command arg1 arg2
If this gets longer, why not place all the commands in a remote script and just run it with:
ssh user@remote /path/to/script
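For instance (a hypothetical sketch; the script name and path are illustrative), you could copy the script to the remote machine and run it in one go:
scp ./deploy.sh user@remote:/tmp/deploy.sh
ssh user@remote 'bash /tmp/deploy.sh'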

Running Docker commands included in a shell script alongside other Linux commands and switching users

Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. My system is set to only run Docker as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution:
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with $ sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root just for this command. My system is set to only run Docker as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
I run the shell script as a general user. For commands that require root privileges, I use su - root -c "<command>". The terminal prompts for the root password and executes the quoted command as root, then the shell proceeds as the general user.
Actually posting this as an answer:
You switch your current user to root during the script, but the script itself was started by your own user, and su opens a new shell rather than feeding it the remaining lines.
So the docker cp command is still executed as your own user, only after the root shell exits.
This is why you don't see the output of docker cp: it runs with insufficient privileges and fails.
A solution to this is either using sudo before docker cp, starting the script as root, or adding your user to the docker group, which authorizes your user to run docker commands.
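For the group option, the usual invocation is the following (a sketch; you must log out and back in, or run newgrp docker, before the new group membership takes effect):
sudo usermod -aG docker "$USER"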
I had a similar issue where the docker commands ran fine in the terminal but the same commands would not run when I put them into a bash script, and it came down to two reasons.
The docker commands need to be run with elevated privileges, that is, with sudo (e.g. sudo docker ps works but docker ps won't). Alternatively, you can add the current user to the docker group so that sudo is not needed with each docker command; please visit this link and follow section 2 to do the same.
Run the script in the correct way:
One should have #!/bin/bash at the start of the script. It is a shebang, which is required by each script.
One should save the file without the .sh extension.
One should give the script execute permission with chmod +x script_name.
One should run the script with bash script_name.
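Applied to a hypothetical script saved as backup, those steps would look like:
chmod +x backup   # give the script execute permission
bash backup       # or ./backup, which relies on the shebang line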

"sudo su && somecommand" doesn't run somecommand

I created a Jenkins job today. What it should do: the Jenkins user logs into another server and runs two commands separated by &&:
ssh -i /creds/jenkins jenkins@servername.com "sh -c 'sudo su && df'"
The login part works fine; it then runs the sudo su command and becomes root, but it never runs the second command (i.e. df).
I even tried this manually: from the Jenkins machine I logged into the other server (servername) and ran sh -c "sudo su && df", with no luck.
Can you please help?
Thanks in advance
If you are trying to run the df command as root, you should instead do sudo df.
This is because with sudo su && df, you are executing sudo su first; df only runs after that root shell exits.
Also make sure your jenkins user can use sudo without a password.
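Applied to the command from the question, that becomes (a sketch reusing the key and host from the post; the sudoers line is one common way to allow passwordless sudo):
# on servername.com, e.g. via visudo: jenkins ALL=(ALL) NOPASSWD: ALL
ssh -i /creds/jenkins jenkins@servername.com "sudo df"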
The sudo su launches a second shell, and the && df part waits in the non-root shell, to be executed only after the sudo su shell exits successfully.
This could be what you're looking for:
sh -c 'sudo su - root -c "df"'
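In the context of the Jenkins job, the full line would then be something like (a sketch reusing the key and host from the question):
ssh -i /creds/jenkins jenkins@servername.com "sudo su - root -c 'df'"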
Edit: please note that I don't normally use or advocate the use of sudo su - root -c type constructions. However, I have seen rare cases in which a program doesn't work properly when called via sudo/gksudo, but does work properly when called via su/gksu; in such cases, a given user should try sudo -i first, and if that does not work, one might have to resort to sudo su - root -c or similar as a workaround to deal with a "misbehaving" program. Since the OP used similar syntax in their post, I assumed this could be such a workaround case, so I kept the sudo su - root -c structure in my answer.
When you run sudo su && df, sudo su starts a child shell immediately, without executing the && df part of the command. When you hit Ctrl+D, the child shell exits and you return to the parent shell; that's when your && df executes. You can do this with here strings instead; it might not be the best option, but it works because df is fed to the root shell's standard input rather than waiting for a new interactive shell:
sh -c "sudo su" <<<df
Note: don't surround <<< df with any quotes.
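If here strings feel unfamiliar, a pipe is a hedged equivalent; either way df is fed to the root shell's standard input:
echo df | sh -c "sudo su"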

Why can't I execute systemctl commands as superuser?

I wrote a script to download and install kubernetes on an ubuntu machine.
The last part of the script would be to start the kubelet service.
echo "Initializing the master node"
kubeadm reset
systemctl start kubelet.service
kubeadm init
I am forcing the user to run the script as the root user. However, when the script reaches the systemctl command, it is unable to execute it. I also tried running the command manually as the root user and could not; yet I am able to execute it as a regular user.
Does anyone know why? Is there a workaround?
A possible workaround is to start the service as a regular user, even though the script runs as root. First, you need to find out who the "original" user is:
originalUser="$(logname 2>/dev/null)"
and then call the service as this user:
su - "$originalUser" -c "systemctl start kubelet.service"
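Putting the two pieces together, the tail of the install script might look like this (a minimal sketch, keeping the kubelet unit name from the question):
#!/usr/bin/env bash
# determine the user who invoked the script, before any su/sudo
originalUser="$(logname 2>/dev/null)"
# start the unit as that user rather than as root
su - "$originalUser" -c "systemctl start kubelet.service"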
Maybe that specific service depends on being run by a user who is not root (some programs test for that).

Bash dropping out of sudo in a script

I need to execute an install script using sudo, but towards the end of the script, the script needs to drop out of sudo and continue as the regular user.
Example:
sudo ./install.sh
script runs and does what it needs to as root
su myscriptuser
service myscript start
Basically, the service myscript start needs to be run by the regular user, not by root.
su myscriptuser starts another shell as myscriptuser and waits until that shell exits. The script then proceeds to run service myscript start as root again.
What you need instead of the last 2 commands is sudo:
sudo -u myscriptuser service myscript start
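A minimal sketch of how the end of such an install.sh might then look (myscriptuser and myscript are the names from the question):
#!/usr/bin/env bash
# ... installation steps, running as root via sudo ./install.sh ...
# drop privileges only for the service start
sudo -u myscriptuser service myscript start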
