I've got a Ruby CGI script which calls a shell script.
The shell script does a git pull.
When I run the shell script from the command prompt it works.
But when I run it from the Ruby CGI script, the script executes but the git pull doesn't happen.
I'm guessing it's possibly permissions-related, but I can't quite work out how to fix it.
The Ruby script is:
#!/usr/local/rvm/rubies/ruby-1.9.3-p125/bin/ruby
require "cgi"
git_pull = `sh /github/do_git_pull.sh`
move_apanels = `sh /github/move_apanels.sh`
puts "Content-type: text/html\n\n"
puts "<html><body>We've done the following:<ul>"
puts "<li>#{git_pull.to_s}</li>"
puts "<li>#{move_apanels.to_s}</li>"
puts "</ul></body></html>"
And the shell script is:
#!/bin/bash
sudo sh -c cd /github
sudo sh -c git pull origin master
echo "Git Pull Completed"
Both files have chmod 777
Any ideas?
Doing this:
sudo sh -c cd /github
only changes the working directory for the duration of that sh command. It does not affect the calling shell. You need to run cd and git pull in the same sh invocation:
sudo sh -c 'cd /github && git pull origin master'
Setting 777 on your scripts won't cut it. Try to find out which user your Ruby script executes the shell script as. Git authenticates over SSH, and SSH keys can normally be used only by their owner, so git pull will fail if another user tries to run it.
Check out this question on how to run a shell script as a different user.
Also make sure that the PATH in the target environment is set properly and accessible (if you run the web server chrooted).
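Putting the pieces above together, a corrected do_git_pull.sh could look like this (a sketch; "deployuser" is a hypothetical user that owns the SSH keys):
#!/bin/bash
# Show which user the CGI script actually runs this as
echo "Running as: $(whoami)"
# cd and git pull must happen in the same sh invocation;
# use sudo -u deployuser if the pull must run as the key owner
sudo sh -c 'cd /github && git pull origin master'
echo "Git Pull Completed"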
I have a bash script on my test server that will export my WordPress db, rsync the db to the prod server, and git push all of my files to the prod server.
Within the prod server's git repo I have a git post-receive hook correctly configured.
#!/bin/bash
#Receive Git Push from Test
git --work-tree=/home/username/public_html --git-dir=/home/username/public_html/git/production-site.git checkout -f
Within the working-tree directory (the WordPress directory) on the prod server I also have a bash script that will import the newly uploaded db: /home/username/public_html/db-import-script.sh
#!/bin/bash
#bunch of commands
...
...
...
Question:
How can I automatically execute the db import script immediately following a git push?
Troubleshooting:
Inside post-receive, I have tried using an absolute path to execute the script, with no luck:
#!/bin/bash
#Receive Git Push from Test
git --work-tree=/home/username/public_html --git-dir=/home/username/public_html/git/production-site.git checkout -f
#execute script with absolute path
/home/username/public_html/db-import-script.sh
db-import-script.sh does not execute. NOTE: this script must remain located in the WordPress directory because it uses wp-cli commands for various actions.
Any tips?
I use Gitea, for example, where on the server one simply has to copy a script into the post-receive.d/ folder. The post-receive hook (see below; you may use it as a template) scans this folder and executes the scripts in it.
#!/usr/bin/env bash
# AUTO GENERATED BY GITEA, DO NOT MODIFY
data=$(cat)
exitcodes=""
hookname=$(basename $0)
GIT_DIR=${GIT_DIR:-$(dirname $0)/..}
for hook in ${GIT_DIR}/hooks/${hookname}.d/*; do
test -x "${hook}" && test -f "${hook}" || continue
echo "${data}" | "${hook}"
exitcodes="${exitcodes} $?"
done
for i in ${exitcodes}; do
[ ${i} -eq 0 ] || exit ${i}
done
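With that template installed as the post-receive hook, wiring up the db import is just a matter of dropping the script into the .d folder (a sketch, using the paths from the question):
cd /home/username/public_html/git/production-site.git/hooks
mkdir -p post-receive.d
cp /home/username/public_html/db-import-script.sh post-receive.d/
chmod +x post-receive.d/db-import-script.sh
Note that hooks run with the bare repo as their working directory, so a script that relies on wp-cli should first cd into the WordPress directory itself.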
KISS rule (keep it simple, stupid):
Rather than spending days trying to learn sysdig well enough to trace a process I had never previously heard of and its subprocesses (no offence intended, Charles, I just need to actually get tasks done; your bash debug-log snippet was highly useful), and rather than creating some git/Gitea hybrid (no offence @m19v, I did try your solution, but it didn't work), I went with what I knew: the production server's db-import.sh worked properly, and my test server's git push / db upload push.sh worked properly.
My final solution was to leave the production server's post-receive properly configured and to remotely execute my db-import.sh script via SSH, directly within the directory in which it needs to be executed.
In a nutshell, I added this to the end of push.sh script on my test server:
#Remotely execute db import
ssh -p22 -i /home/username/.ssh/id_rsa username@1233.456.789.12 'cd public_html && bash' << EOF
./db-import.sh
EOF
Bang problem solved...
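For what it's worth, the heredoc isn't strictly necessary; the same call fits on one line (an equivalent sketch, same host and key as above):
ssh -p22 -i /home/username/.ssh/id_rsa username@1233.456.789.12 'cd public_html && ./db-import.sh'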
My custom-made image ends with
ENTRYPOINT [ "/bin/bash", "-c", "/home/tool/entry_script.sh" ]
This is absolutely needed because at runtime, the first thing the user must do is to update an already cloned github project, and users will often forget to do it.
But then, when I try to launch it using
docker run -it --rm my_image /bin/bash
I can see that the ENTRYPOINT script is executed, but then the container exits.
I expect /bin/bash to be executed and the shell to remain in interactive mode, due to the -it flags.
What am I doing wrong?
UPDATE: here is my entry script:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
Actually I get no errors at runtime.
When you set an entry point in a Docker container, it is the only thing the container will run; it's the one and only process that matters (PID 1). Once your entry_script.sh finishes running and returns an exit code, Docker considers the container's work done and exits, since the only process inside it has exited.
If you want to launch a shell inside the container, you can modify your entry point script like so:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
/bin/bash "$#"
This starts a shell after the repo update has been done. The container will now exit when the user quits the shell.
The -i and -t flags will make sure the session gives you stdin/stdout and will allocate a pseudo-tty for you, but they will not automatically run bash for you. Some containers don't even have bash in them.
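One caveat worth noting: with the -c form used in the question, ENTRYPOINT [ "/bin/bash", "-c", "/home/tool/entry_script.sh" ], anything placed after the image name is not handed to the script as "$@". Pointing the entry point at the script itself avoids that (a sketch):
# In the Dockerfile; entry_script.sh must be executable
ENTRYPOINT [ "/home/tool/entry_script.sh" ]
Then, for example:
docker run -it --rm my_image                  # "$@" is empty, so the final /bin/bash is interactive
docker run -it --rm my_image -c 'echo hello'  # extra args are forwarded to the final /bin/bash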
I think the original question and answer are pretty good (thank you!). However, I had exactly the same problem, and the provided solution did not work for me; I wasted a lot of time figuring out what I was doing wrong, so here is a solution that should work all the time, in case it saves others that time. In my Docker entry point I source a shell script file from the Intel compiler, and the received parameters "$@" are somewhat changed by the source command. Then, when the script ends with /bin/bash "$@", the original parameters are gone. Here is my updated version, which should be safer for all use cases:
#!/bin/bash
# Save original parameters
allparams=("$#")
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
# Forward initial parameters
/bin/bash "${allparams[#]}"
I have a Jenkins job which has its own set of build servers. The process I follow is to build applications on the Jenkins build server and then use "send files or execute commands over SSH" to copy the build and deploy it using a shell script.
As part of the deployment commands, I have quite a few steps to perform, like mkdir, tar -xzvf, etc. I want to execute these deployment steps as a specific user, "K". But when I type the sudo su - K command, the Jenkins job fails because I am unable to feed the password to it.
#!/bin/bash
sudo su - K << \EOF
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
EOF
To handle that, I used a PASSWORD parameter and made the build parameterized, so that I can use the same PASSWORD in the shell script.
I have tried to use Expect, but it looks like commands such as cd and tar -xzvf do not work inside it, and even if they did, they would not be executed as user K since the terminal may expire (please correct me if I'm wrong).
export PASSWORD
/usr/bin/expect << EOD
spawn sudo su - K
expect "password for K"
send -- "$PASSWORD"
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
EOD
Note: I do not have the root access to the servers and hence cannot tweak the host key files. Is there a work around for this problem?
Even if you get it working, having passwords in scripts or on the command line is probably not ideal from a security standpoint. Two things I would suggest:
1) Use a public SSH key owned by the user on your initiating system as an authorized key on the remote system, to allow logging in as the intended user on the remote system without a password. You should have all you need to do that (no root access required, only access to the users you already use on each system).
2) Set up the "sudoers" file on the remote system so that the user you log in as is allowed to perform the commands you need as the required user. You would need the system administrator's help for that. See the sketch after these two points.
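For example (a sketch; "deployuser" and "target-host" are placeholders, and the sudoers line must be added with visudo):
# 1) On the Jenkins build server, once:
ssh-keygen -t rsa
ssh-copy-id deployuser@target-host   # after this, ssh deployuser@target-host needs no password
# 2) On the target server, a sudoers entry letting deployuser become K without a password:
#      deployuser ALL=(K) NOPASSWD: ALL
# The deployment steps can then run as K non-interactively:
ssh deployuser@target-host "sudo -u K bash -c 'cd /DIR1/DIR2 && tar -xzvf MY_APP.war'"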
Alternatively, you can pipe the password to sudo's stdin, like so:
SUDO_PASSWORD=TheSudoPassword
...
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S some_root_command"
Later:
How can I use this in the first snippet?
Write a file:
deploy.sh
#!/bin/sh
cd /DIR1/DIR2
cp ~/MY_APP.war .
mkdir DIR3
tar -xzvf MY_APP.war
Then:
chmod +x deploy.sh
scp deploy.sh kilroy@somehost:~
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S ./deploy.sh"
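If the commands must run as K rather than root (as in the original question), sudo can switch users as well; a sketch, assuming the sudoers configuration permits it and K can read the script:
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S -u K ./deploy.sh"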
I am using Git Bash, and I would like to write a script that processes the same set of commands for each directory (local repo) in my home directory. This would be easy enough in DOS, which most consider handicapped at best, so I'm sure there's a way to do it in Bash.
For example, some pseudo-code:
ls --directories-in-this-folder -> $repo_list
for each $folder in $repo_list do {
...my command set for each repo...
}
Does anyone know how to do this in Bash?
You can do that in bash (even on Windows, if you name your script git-xxx and put it anywhere in your %PATH%):
#! /bin/bash
cd /your/git/repos/dir
for folder in $(ls -1); do
cd /your/git/repos/dir/$folder
# your git command
done
As mentioned in "Git Status Across Multiple Repositories on a Mac", you don't even have to cd into the git repo folder in order to execute a git command:
#! /bin/bash
cd /your/git/repos/dir
for folder in $(ls -1); do
worktree=/your/git/repos/dir/$folder
gitdir=$worktree/.git # for non-bare repos
# your git command
git --git-dir=$gitdir --work-tree=$worktree ...
done
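If the directory can contain entries that aren't repositories (or folder names with spaces), a glob plus a quick check is a little safer; a variation on the same loop:
#! /bin/bash
cd /your/git/repos/dir
for worktree in */; do
    # Only touch folders that actually contain a .git directory
    [ -d "$worktree/.git" ] && git --git-dir="$worktree/.git" --work-tree="$worktree" status -s
done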
As part of an intricate Bash script, I'd like to execute a command on a remote system from within the script itself.
Right now, I run the script which tailors files for the remote system and uploads them, then through a ssh login I execute a single command.
So for full marks:
How do I log into the remote system from the bash script (i.e. pass the credentials in non-interactively)?
How can I execute a command (specifically "chmod 755 /go && /go") from within the script?
Following Tim Post's answer:
Set up public keys, and then you can do the following:
#!/bin/bash
ssh user@host "chmod 755 /go && /go"
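If you later need to run several commands, a quoted heredoc keeps the remote part readable; the quoted 'EOF' stops the local shell from expanding anything before it is sent:
#!/bin/bash
ssh user@host <<'EOF'
chmod 755 /go
/go
EOF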