I am using CD for deploying my code to a VPS. This VPS is running ubuntu 16.04 and has a user 'deployer'.
Now when I use ssh deployer@server I get shell access to the server, and then when using cd /var/www I get into the /var/www directory.
When I do this from the deployment script defined in .gitlab-ci.yml, I get this error: /bin/bash: line 101: cd: /var/www/data/: No such file or directory. I also ran ls -al to view the directory structure of /var, which turned out not to contain the www directory. So clearly I now have no access to the www directory.
- rsync -avz --exclude=.env . deployer@devvers.work:/var/www/data/staging/home
- ssh deployer@devvers.work
- cd /var
- ls -al
- cd /var/www
This is the part of the script where it fails. Does anyone know why my user has different permissions when using ssh from the terminal than when using ssh in this script? Copying the files with rsync went fine and all the files were copied.
My guess is that the cd and ls commands that you are trying are actually executed in the runner's environment (be it the host or a docker container, depending on your setup), not on the machine you ssh into.
I'd suggest you rather execute those commands with ssh. An example of creating a file and checking that it has been created:
ssh deployer@devvers.work "touch /var/www/test_file && ls -al /var/www/"
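Applied to the failing part of the script, that means running the remote steps through a single ssh invocation instead of separate cd/ls lines (host and paths are taken from the question; this is a sketch, not a verified drop-in):
- rsync -avz --exclude=.env . deployer@devvers.work:/var/www/data/staging/home
- ssh deployer@devvers.work "cd /var/www && ls -al"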
It is best to use an ssh executor, configured through a config.toml:
/etc/gitlab-runner/config.toml:
concurrent = 1

[[runners]]
  url = "http://your_gitlab/ci"
  token = "xxx..."
  name = "yourGitLabCI"
  executor = "ssh"
  [runners.ssh]
    user = "deployer"
    host = "devvers.work"
    port = "22"
    identity_file = "/home/user/.ssh/id_rsa"
Then your .gitlab-ci.yml can simply include:
job:
  script:
    - "ls /var/www"
    - "cd /var/www"
    ...
See also this example.
If you encounter the line 101: cd: issue on a gitlab-runner that is configured as a shell executor, there might actually be a .bash_logout file in the gitlab-runner user's home directory that causes the issue, together with https://gitlab.com/gitlab-org/gitlab-runner/issues/3849
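If that is your setup, a quick check on the runner machine looks roughly like this (the home directory path is an assumption; adjust it to your gitlab-runner user):
ls -la /home/gitlab-runner/.bash_logout                                    # does the file exist?
mv /home/gitlab-runner/.bash_logout /home/gitlab-runner/.bash_logout.bak   # move it aside as a workaround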
Related
There is a script located at the following path:
/usr/local/bin/subrun
The owner & user group of the above file is root.
When I run the above script locally, using the following command in a BASH shell:
/bin/sh /usr/local/bin/subrun
It runs perfectly fine.
But when I try to run the same script remotely, using the following command in a BASH shell:
ssh user@host /usr/local/bin/subrun
It throws an error:
/usr/local/bin/subrun: Command not found.
Question: How do I resolve this? Does this have to do with 'root' (the owner & user group of the script)?
PS: There is also another script in the same location with a different owner & user group (e.g. owner: manager & user group: admin). That script can be run locally or remotely without any issue.
PS2: The 'subrun' script file has the following permissions: '-rwxr-xr-x' (and I am not allowed to change the permissions using chmod; it says Operation not permitted).
Since you run it locally as:
/bin/sh /usr/local/bin/subrun
rather than just:
/usr/local/bin/subrun
it's probably not an executable file on either machine, so you should do the same when trying to run it remotely and use:
ssh user@host '/bin/sh /usr/local/bin/subrun'
instead of
ssh user@host /usr/local/bin/subrun
or make it executable on every machine by running chmod ugo+x /usr/local/bin/subrun or similar, and THEN you can call it as just /usr/local/bin/subrun on every machine.
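For example (user and host are the placeholders from the question; changing the mode only works for an account allowed to do so, which is exactly the restriction mentioned in PS2):
chmod ugo+x /usr/local/bin/subrun      # run on every machine that has the script
ssh user@host /usr/local/bin/subrun    # the direct remote call should then work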
Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution:
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with $ sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
I run the shell script as a general user. For commands that require root privileges, I use su - root -c "<command>". The terminal prompts for the root password and executes the command in quotes as root, then the shell proceeds as the general user.
Actually posting this as an answer:
You switch your current user to root during the script, but the script was executed by your own user.
So the docker cp command will also be executed as your own user, while you are merely logged into the root account.
This results in you not seeing the output of docker cp (which might give you insight into why it is not working - I suspect insufficient privileges).
A solution to this is either using sudo before docker cp, starting the script as root, or adding your user to the "docker" group, which authorizes your user to use the docker commands.
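For example, either of the following (the container name is a placeholder, as in the question; after adding yourself to the docker group you need to log out and back in):
# Option 1: run just the privileged command with sudo inside the script
sudo docker cp <container-name>:/home/user/data /home/user/docker_backup
# Option 2: allow your user to talk to the Docker daemon without sudo
sudo usermod -aG docker "$USER"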
I had a similar issue where the docker commands were running fine in the terminal, but the same commands were not running when I put them into a bash script, and the issue basically came down to two reasons.
The docker commands need to be run with elevated privileges, that is, with the sudo command (e.g. sudo docker ps works but docker ps won't). One could add the current user to the docker group so that sudo is not needed with each docker command. Please visit this link and follow section 2 to do the same.
Run the script in the correct way:
One should have #!/bin/bash at the start of the script. It is a shebang that is required by each script.
One should save the file without a .sh extension.
One should provide execute permission to the script by running chmod 777 script_name.
Run the script with bash script_name (see the sketch below).
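Putting those points together, a minimal sketch might look like this (the file name docker_backup and the sudo docker ps command are only examples):
# Create the script file, saved without a .sh extension
cat > docker_backup <<'EOF'
#!/bin/bash
sudo docker ps
EOF
chmod 777 docker_backup   # give the script execute permission
bash docker_backup        # run it through bash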
I'm trying to use a Vagrantfile I received to set up a VM in Ubuntu with VirtualBox.
After using the vagrant up command I get the following error:
File provisioner:
* File upload source file /home/c-server/tools/appDeploy.sh must exist
appDeploy.sh does exist in the correct location and looks like this:
#!/bin/bash
#
# Update the app server
#
/usr/local/bin/aws s3 cp s3://dev-build-ci-server/deploy.zip /tmp/.
cd /tmp
unzip -o deploy.zip vagrant/tools/deploy.sh
cp -f vagrant/tools/deploy.sh /tmp/.
rm -rf vagrant
chmod +x /tmp/deploy.sh
dos2unix /tmp/deploy.sh
./deploy.sh
rm -rf ./deploy.sh ./deploy.zip
#
sudo /etc/init.d/supervisor stop
sudo /etc/init.d/supervisor start
#
Since the script exists in the correct location, I'm assuming it's looking for something else (maybe something that should exist on my local computer). What that is, I am not sure.
I did some research into what the file provisioner is and what it does but I cannot find an answer to get me past this error.
It may very well be important that this Vagrantfile works correctly on Windows 10, but I need to get it working on Ubuntu.
In your Vagrantfile, check that the filenames are capitalized correctly. Windows isn't case-sensitive but Ubuntu is.
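One quick way to compare the name Vagrant is looking for with what is actually on disk (the path comes from the error message above):
# List the actual file names case-insensitively to spot capitalization mismatches
ls /home/c-server/tools/ | grep -i appdeploy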
I have a Jenkins job, which has its own set of build servers. The process I follow is building applications on the Jenkins build server, and then I use "send files or execute commands over SSH" to copy my build and deploy it using a shell script.
As part of the deployment commands, I have quite a few steps to be done, like mkdir, tar -xzvf etc. I want to execute these deployment steps as a specific user "K". But when I type the sudo su - K command, the Jenkins job fails because I am unable to feed the password to it.
#!/bin/bash
sudo su - K << \EOF
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOF
To handle that, I used a PASSWORD parameter and made the build parameterized, so that I can use the same PASSWORD in the shell script.
I have tried to use Expect, but it looks like commands like cd and tar -xzvf are not working inside it, and even if they work they will not be executed with K as the user since the terminal may expire (please correct me if I am wrong).
export $PASSWORD
/usr/bin/expect << EOD
spawn sudo su - k
expect "password for K"
send -- "$PASSWORD"
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOD
Note: I do not have root access to the servers and hence cannot tweak the host key files. Is there a workaround for this problem?
Even if you get it working, having passwords in scripts or on the command line is probably not ideal from a security standpoint. Two things I would suggest:
1) Use a public SSH key owned by the user on your initiating system as an authorized key on the remote system, to allow logging in as the intended user on the remote system without a password. You should have all you need to do that (no root access required, only access to the users you already use on each system).
2) Set up the "sudoers" file on the remote system so that the user you log in as is allowed to perform the commands you need as the required user. You would need the system administrator's help for that. A sketch of both follows.
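A rough outline of both steps (the user names, host and script path are placeholders, and the sudoers rule has to be added by an administrator via visudo):
# Option 1: password-less SSH login directly as the target user
ssh-keygen -t rsa             # only if the Jenkins user has no key pair yet
ssh-copy-id K@somehost        # installs the public key for user K on the remote host
# Option 2: a sudoers rule on the remote host letting the login user
# run the deploy script as K without a password, e.g.:
#   jenkins ALL = (K) NOPASSWD: /home/K/deploy.sh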
Like so:
SUDO_PASSWORD=TheSudoPassword
...
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S some_root_command"
Later
How can I use this in the 1st snippet?
Write a file:
deploy.sh
#!/bin/sh
cd /DIR1/DIR2
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
Then:
chmod +x deploy.sh
scp deploy.sh kilroy@somehost:~
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S ./deploy.sh"
I would like to write a shell script that sets up a Mercurial repository, and allow all users in the group "developers" to execute this script.
The script is owned by the user "hg", and works fine when run. The problem comes when I try to run it as another user using sudo: the execution halts with a "permission denied" error when it tries to source another file.
The script file in question:
create_repo.sh
#!/bin/bash
source colors.sh
REPOROOT="/srv/repository/mercurial/"
... rest of the script ....
Permissions of create_repo.sh, and colors.sh:
-rwxr--r-- 1 hg hg 551 2011-01-07 10:20 colors.sh
-rwxr--r-- 1 hg hg 1137 2011-01-07 11:08 create_repo.sh
Sudoers setup:
%developer ALL = (hg) NOPASSWD: /home/hg/scripts/create_repo.sh
What I'm trying to run:
user@nebu:~$ id
uid=1000(user) gid=1000(user) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),113(sambashare),116(admin),1000(user),1001(developer)
user@nebu:~$ sudo -l
Matching Defaults entries for user on this host:
env_reset
User user may run the following commands on this host:
(ALL) ALL
(hg) NOPASSWD: /home/hg/scripts/create_repo.sh
user@nebu:~$ sudo -u hg /home/hg/scripts/create_repo.sh
/home/hg/scripts/create_repo.sh: line 3: colors.sh: Permission denied
So the script is executed, but halts when it tries to include the other script.
I have also tried using:
user@nebu:~$ sudo -u hg /bin/bash /home/hg/scripts/create_repo.sh
Which gives the same result.
What is the correct way to include another shell script, if the script may be ran with a different user, through sudo?
What is probably happening is that the script tries to source the file colors.sh from the current directory and fails because, under sudo, it doesn't have permission to read your current directory.
Try using source /home/hg/scripts/colors.sh.
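If you would rather not hard-code the path, a common alternative is to resolve colors.sh relative to the script's own location (a sketch; assumes the script is run with bash):
#!/bin/bash
# Resolve the directory this script lives in, independent of the caller's current directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/colors.sh"
REPOROOT="/srv/repository/mercurial/"
# ... rest of the script ...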