I'm trying to use Jenkins to create a special git repository. I created a free-style project that just executes a shell script. When I execute this script by hand, without Jenkins, it works just fine.
From Jenkins, however, it behaves quite differently.
# this will remove all subtrees
git log | grep git-subtree-dir | tr -d ' ' | cut -d ":" -f2 | sort | uniq | xargs -I {} bash -c 'if [ -d $(git rev-parse --show-toplevel)/{} ] ; then rm -rf {}; fi'
rm -rf .git
If this part is executed by Jenkins, the console output shows errors like these:
rm: cannot remove '.git/objects/pack/pack-022eb85d38a41e66ad3f43a5f28809a5a3ee4a0f.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-05630eb059838f149ad30483bd48d37f9a629c70.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-26f510b5a2d15ba9372cf0a89628d743811e3bb2.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-33d276d82226c201eedd419e5fd24b6b906d4c03.pack': Device or resource busy
I modified this part of the script like this:
while true
do
if rm -rf .git ; then
break
else
continue
fi
done
But this doesn't help. In the task manager I see a git process that just doesn't terminate.
I pieced this script together through a lot of googling and don't fully understand what's going on.
Jenkins runs on Windows Server 2012 behind IIS; shell scripts are executed by the bash shipped with Git for Windows.
1/ Ensure your path is correct, and that no quote/double-quote escaping occurs while the Jenkins job starts.
2/ Your command line is a bit too clever to be evaluated correctly and safely.
Put your commands in a regular script starting with #!/bin/bash, rather than passing them on the command line.
xargs -I {} bash -c 'if [ -d $(git rev-parse --show-toplevel)/{} ] ; then rm -rf {}; fi'
becomes
xargs -I {} /path/myscript.sh {}
with
#!/bin/bash
toplevel="$(git rev-parse --show-toplevel)"  # top-level directory of the repository
wait  # wait for any background jobs to finish (see point 3)
if [ -d "${toplevel}/${1}" ] ; then
rm -rf "${1}"
fi
Please note that your script is really unsafe, as you rm -rf a parameter without even validating it first!
3/ You can add a wait between the git and the rm to wait for the end of the git process (see the sketch after this list)
4/ Log your git commands into a log file, with a redirection >> /tmp/git-jenkins-log
5/ Put all of those commands in a script (see #2)
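A minimal sketch of what wait does; note that it only blocks on background jobs started by the current shell, so the git command must be backgrounded for it to have any effect (git gc here is just a stand-in for a long-running git operation):
git gc &   # a long-running git operation, started in the background
wait       # blocks until all background jobs of this shell have finished
rm -rf .git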
The following is an infinite loop in case rm -rf fails:
while true
do
if rm -rf .git ; then
break
else
continue
fi
done
Indeed, continue can be used in a for or while loop to move on to the next iteration, but in this while loop it will just run the same rm command forever.
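If a retry is wanted at all, a bounded loop with a small delay is safer than spinning forever. A sketch (the attempt count and the delay are arbitrary choices):
for attempt in {1..10}; do
    if rm -rf .git; then
        break                                   # success: stop retrying
    fi
    echo "rm failed (attempt $attempt), retrying in 2s..." >&2
    sleep 2
done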
Well, apparently I was able to fix my issue by running the script as a different user.
By default, Jenkins on Windows executes all jobs as the SYSTEM user. I have no idea why that affects the behaviour of my script, but running it with psexec from a specially created user account worked.
In case anyone is interested, I did something like this:
psexec -accepteula -h -u Jenkins -p _password_ "full/path/to/bash.exe" full/path/to/script.sh
I created a test directory on a remote server and, to simulate the command below, put 6 files inside it. The expected behavior of the command is to keep only the 5 most recent files in the directory. The good news is that the command works!
My only trouble is that I cannot execute the same command remotely. The only difference is the "" quoting for the remote execution:
ssh account@someremoteserver.com "rm -rf `ls -t /usr/local/testingCommands | awk 'NR>5'`"
The reason for this is that I have a Jenkins CI server that needs to remotely clean up the remote server, keeping only the 5 most recent files.
Any help is greatly appreciated. Thanks!
ssh account@someremoteserver.com '/usr/bin/ls -1Qt /usr/local/testingCommands | awk "NR > 5" | /usr/bin/xargs /usr/bin/rm -rf'
Remarks:
Use absolute paths for all commands (for security reasons)
rm ... $(...) is really too dangerous, so invert the rm and ls commands:
use ls with the one-file-per-line option (-1) and quoted filenames (-Q) to cope with filenames containing spaces
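For what it's worth, this is also why the original backtick version misbehaved: inside double quotes, the substitution is expanded by the local shell before ssh even runs. A minimal illustration (user@host is a placeholder):
ssh user@host "echo $(hostname)"   # expanded by the LOCAL shell: prints the local hostname
ssh user@host 'echo $(hostname)'   # expanded by the REMOTE shell: prints the remote hostname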
In BASH, the following command removes everything in a directory except one file:
rm -rf !(filename.txt)
However, over SSH the same command changes nothing in the directory and returns the following error: -jailshell: !: event not found
So I escaped the ! with \ (the parentheses also require escaping), but it still doesn't work:
rm -rf \!\(filename.txt\)
It returns no error and nothing in the directory changed.
Is it even possible to run this command in SSH? I found a workaround but if this is possible it would expedite things considerably.
I connect to the ssh server using the alias below:
alias devssh="ssh -p 2222 -i ~/.ssh/private_key user@host"
!(filename.txt) is an extglob, a bash feature that might have to be enabled. Make sure that your ssh server runs bash and that extglob is enabled:
ssh user#host "bash -O extglob -c 'rm -rf !(filename.txt)'"
Or by using your alias:
devssh "bash -O extglob -c 'rm -rf !(filename.txt)'"
If you are sure that the remote system uses bash by default, you can also drop the bash -c part. But your error message indicates that the ssh server runs jailshell.
ssh user@host 'shopt -s extglob; rm -rf !(filename.txt)'
devssh 'shopt -s extglob; rm -rf !(filename.txt)'
I wouldn't do it that way. I wouldn't rely on bash being on the remote, and I wouldn't rely on any bashisms. I would use:
$ ssh user@host 'rm $(ls | grep -v "^filename.txt$")'
If I wanted to protect against the possibility that the directory might be empty, I'd assign the output of $(...) to a variable, and test it for emptiness. If I was concerned the command might get too long, I'd write the names to a file, and send the grep output to rm with xargs.
If it got too elaborate, I'd copy a script to the remote and execute it.
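For instance, a sketch of that xargs variant (assuming GNU xargs and filenames without embedded newlines; -r makes xargs skip rm entirely when the list is empty):
ssh user@host 'ls | grep -v "^filename.txt$" | xargs -r rm --'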
Even though all my steps pass successfully, GitLab CI shows this:
"Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1"
and fails the job at the very end. Interestingly, this only happens on my master branch; it runs successfully on other branches. Has anyone faced this issue and found a resolution?
- >
for dir in $(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA"); do
if [[ -f "$dir" ]]; then
SERVICE=$(echo "$dir")
# helm install the service
fi
done
- echo "deployed"
Overview
This drove me crazy and I'm still not sure what the appropriate answer is. I just ran into this issue myself and sank hours into it. I think GitLab messed something up with command substitution (it shows a new release yesterday), although I could be wrong about the issue or its timing. It also seems to occur only for some command substitutions and not others; I initially suspected it might be related to outputting to /dev/null, but I wasn't going to dive too deep. It always failed immediately after the command substitution was initiated.
My code
I had code similar to yours (reduced version below) and tried manipulating it multiple ways, but each use of command substitution yielded the same failure message:
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
Attempts I've made include the following:
- folders=$(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
- >
while read folder; do
echo "$folder"
done <<< "$folders"
And ...
- >
while read folder; do
echo "$folder"
done <<< $(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
Both of those versions succeeded on my local machine but failed in GitLab (I might have typos above; please don't scrutinize, it's a reduced version of my actual program).
How I fixed it
Rather than using command substitution $(...), I instead opted for process substitution <(...) and it seems to be working without issue.
- >
while read folder; do
echo "$folder"
done < <(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
I would try the same substitution in your code if possible:
- >
while read dir; do
# the rest goes here
done < <(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA")
The issue might also be the line inside the if statement (the echo); you can replace that with the following:
read SERVICE < <(echo "$dir")
Again, not exactly sure this will fix the issue for you as I'm still unsure what the cause is, but it resolved my issue. Best of luck.
The error seemed to vanish for me once I moved the script out of the .gitlab-ci.yml file into a separate script.sh file and called script.sh from the GitLab YAML.
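The job then simply calls the script, something like this (a sketch; script.sh is the hypothetical file now holding the shell logic that used to be inlined):
script:
  - chmod +x ./script.sh
  - ./script.sh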
We ran into the same issue in GitLab v13.3.6-ee with the following line of the script that we use to open a new merge request:
COUNTBRANCHES=`echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l`;
and as @ctwheels stated, changing that line into this:
read COUNTBRANCHES < <(echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l);
solved our problem.
I had this error when I tried to use a protected CI/CD variable.
In my case, my script ended with a curl command to a URL that would return a 403 Forbidden and probably hang up:
curl -s "$ENV_URL/hello" | grep "hello world"
... if that helps anyone :-)
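As an aside, the exit status of that pipeline is grep's, and grep exits with 1 when it finds no match (for example, on a 403 error page), which alone is enough to fail the job. A sketch of one way to neutralize that when the check is purely informational:
curl -s "$ENV_URL/hello" | grep "hello world" || true   # "|| true" keeps a missed match from failing the job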
It was a very specific use case for me (.NET Core), but it may eventually help someone.
In my case, no error was written in the logs and the tests were executed successfully, but the job failed with the exit message shown in the question.
I was referencing xunit in my source project (not only in my test project), and I don't know why that caused the CI job to fail (it worked locally, showing only a warning: Unable to find testhost.dll. Please publish your test project and retry).
Deleting xunit from my source project (not the test project) resolved the issue.
In my case I just had a conditional command, and if the last condition is false, then GitLab thinks the script has errored (even if that's not the case, because it uses the last line's exit status as the job's result).
This is what my script looked like; it would error if the project uses yarn but not npm:
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
So the solution is just to make sure the last line returns true:
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
true
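An equivalent sketch restructured as if/else, which avoids leaving a false test as the script's final exit status in the first place:
if [ -f yarn.lock ]; then
    yarn install --frozen-lockfile --cache .npm && yarn prod
else
    npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
fi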
In shell, I want to check whether a file exists, then create it if it doesn't exist or delete it if it does. For this I need a one-liner, and I am trying to do something like:
ls | awk '\filename\' <if exist delete else create>
I need the ls because my actual problem involves some command that outputs a list of strings that needs to be piped to awk and then possibly to touch/mkdir.
#!/bin/bash
if [ -n "$1" ] && [ -f "$1" ]  # $1 is the input filename; -f checks that it is a regular file
then
rm "$1" # the file exists: delete it
else
touch "$1" # the file does not exist: create it
fi
save the file as filecreator.sh
change the permissions to allow execution with sudo chmod a+rx filecreator.sh
run the script with ./filecreator.sh yourfile.extension
You can see the file in your directory.
Using oc projects and oc new-project instead of ls and touch, as indicated in a comment:
oc projects |
while read -r proj; do
if [ -d "$proj" ]; then
rm -rf "$proj"
else
oc new-project "$proj"
fi
done
I don't think there is a useful way to write this as a one-liner. If you like, you can replace the newlines with semicolons, except after then and else.
You really should put your actual requirements in the question itself. ls is a superbly useless example because it cannot list a file which doesn't already exist, and you should not use ls in scripts at all.
rm yourfile 2>/dev/null || touch yourfile
If the file existed before, rm will succeed and erase the file, and the touch won't be executed. You end up with no file afterwards.
If the file did not exist before, rm will fail (but the error message is not visible, since it is directed to the bit bucket), and due to the non-zero exit code of rm, the touch will be executed. You end up with an empty file afterwards.
Caveat: If the file exists, but you don't have permissions to remove it, you won't notice this error, due to the redirection of stderr. Hence, for debugging and later diagnosis, it might be better to redirect stderr to some file instead.
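For example (the log path here is an arbitrary choice):
rm yourfile 2>>/tmp/rm-errors.log || touch yourfile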
I've looked around for an answer to this one but couldn't find one.
I have written a simple script that does initial server setup, and I'd like it to remove/unlink itself from the root directory on completion. I've tried a number of solutions I googled (for example /bin/rm $test.sh), but the script always seems to remain in place. Is this possible? Below is my script so far.
#! /bin/bash
cd /root/
wget -r -nH -np --cut-dirs=1 http://myhost.com/install/scripts/
rm -f index.html* *.gif */index.html* */*.gif robots.txt
ls -al /root/
if [ -d /usr/local/psa ]
then
echo plesk > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc ; rm -rf /root/bin ; rm -rf /root/log ; rm -rf /root/old
sed -i "75s/false/true/" /etc/permissions/jail.conf
exit 1;
elif [ -d /var/webmin ]
then
echo webmin > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc ; rm -rf /root/bin ; rm -rf /root/log ; rm -rf /root/old
sed -i "67s/false/true/" /etc/permissions/jail.conf
exit 1;
else
echo no-gui > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc ; rm -rf /root/bin ; rm -rf /root/log ; rm -rf /root/old
sed -i "67s/false/true/" /etc/permissions/jail.conf
exit 1;
fi
rm -- "$0"
Ought to do the trick. $0 is a special variable holding the path with which the script was invoked.
This works for me:
#!/bin/sh
rm test.sh
Maybe you didn't really mean to have the '$' in '$test.sh'?
The script can delete itself via the shred command (as a secure deletion) when it exits.
#!/bin/bash
currentscript="$0"
# Function that is called when the script exits:
function finish {
echo "Securely shredding ${currentscript}"; shred -u ${currentscript};
}
# Do your bashing here...
# Register "finish" to run whenever the script exits, for any reason:
trap finish EXIT
The simplest one:
#!/path/to/rm
Usage: ./path/to/the/script/above
Note: /path/to/rm must not contain any blank characters.
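For instance, assuming rm lives at /bin/rm:
#!/bin/rm
# Executing this file makes the kernel run "/bin/rm /path/to/this/script",
# so the script deletes itself; nothing below the shebang ever runs.
echo "this line never runs"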
I wrote a small script that adds a grace period to a self-deleting script, based on
user742030's answer https://stackoverflow.com/a/34303677/10772577.
function selfShred {
SHREDDING_GRACE_SECONDS=${SHREDDING_GRACE_SECONDS:-5}
if (( $SHREDDING_GRACE_SECONDS > 0 )); then
echo -e "Shreding ${0} in $SHREDDING_GRACE_SECONDS seconds \e[1;31mCTRL-C TO KEEP FILE\e[0m"
BOMB="●"
FUZE='~'
SPARK="\e[1;31m*\e[0m"
SLEEP_LEFT=$SHREDDING_GRACE_SECONDS
while (( $SLEEP_LEFT > 0 )); do
LINE="$BOMB"
for (( j=0; j < $SLEEP_LEFT - 1; j++ )); do
LINE+="$FUZE"
done
LINE+="$SPARK"
echo -en $LINE "\r"
sleep 1
(( SLEEP_LEFT-- ))
done
fi
shred -u "${0}"
}
trap selfShred EXIT
See the repo here: https://github.com/reedHam/self-shred
$0 may not contain the script's name/path in certain circumstances. Please check the following: https://stackoverflow.com/a/35006505/5113030 (Choosing between $0 and BASH_SOURCE...)
The following script should work as expected in these cases:
source script.sh - the script is sourced;
./script.sh - executed interactively;
/bin/bash -- script.sh - passed as an argument to a shell program.
#!/usr/bin/env bash
# ...
rm -- "$( readlink -f -- "${BASH_SOURCE[0]:-$0}" 2> '/dev/null'; )";
Please check the following regarding shell script source reading and execution since it may affect the behavior when a script is deleted while running: https://unix.stackexchange.com/a/121025/133353 (How Does Linux deal with shell scripts?...)
Related: https://stackoverflow.com/a/246128/5113030 (How can I get the source directory of a Bash script from...)
Just add to the end:
rm -- "$0"
Why remove the script at all? As others have mentioned, it means you have to keep a copy elsewhere.
A suggestion is to use a "firstboot"-like approach. Simply create an empty file, e.g. in /etc/sysconfig, whose presence triggers the execution of this script. Then remove that file at the end of the script.
Modify the script so it has the necessary chkconfig headers and place it in /etc/init.d/ so it is run at every boot.
That way you can rerun the script at a later time simply by recreating the trigger script.
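A minimal sketch of that trigger-file idea (the path and filename are assumptions):
TRIGGER=/etc/sysconfig/firstboot-setup
if [ -f "$TRIGGER" ]; then
    # ... one-time server setup goes here ...
    rm -f "$TRIGGER"   # disarm the trigger so later boots skip the setup
fi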
Hope this helps.