How to check whether a file exists via a Bash script?

I'm trying to clone a repo via a bash script and test it after the clone is done. I have written my test code based on Bash Shell: Check File Exists or Not.
#!/bin/bash
echo "*** TRY TO INIT INFER ***"
# Clone Infer
INFER_GIT_PATH="https://github.com/facebook/infer.git"
echo "> Try to Clone Infer from ${INFER_GIT_PATH}"
git clone ${INFER_GIT_PATH}
INFER_PATH="/infer/infer/bin/infer"
[ -e ${INFER_PATH} ] && echo "Infer downloaded successfully" || echo "Something went wrong :("
Although the repo can be downloaded successfully and /infer/infer/bin/infer.sh exists, I'm always getting the "Something went wrong :(" message.

Change it to this (use a relative path):
INFER_PATH="./infer/infer/bin/infer"
[ -e "${INFER_PATH}" ] && echo "Infer downloaded successfully" || echo "Something went wrong :("
and it should work.

If you want to check specifically for a regular file (rather than mere existence), use the -f test instead; note the relative path here too, since the leading / was the bug:
[ -f ./infer/infer/bin/infer ] && echo "Infer downloaded successfully" || echo "Something went wrong :("
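If you want to harden this further, you can also check git clone's exit status instead of only testing for the file afterwards. A minimal sketch of that idea, assuming the script is run from the directory you want to clone into (the early-exit handling is an addition, not part of the answers above):
#!/bin/bash
INFER_GIT_PATH="https://github.com/facebook/infer.git"

# Abort immediately if the clone itself fails
if ! git clone "${INFER_GIT_PATH}"; then
    echo "git clone failed" >&2
    exit 1
fi

# git clone creates ./infer relative to the current directory,
# so anchor the path there instead of at the filesystem root
INFER_PATH="${PWD}/infer/infer/bin/infer"
if [ -e "${INFER_PATH}" ]; then
    echo "Infer downloaded successfully"
else
    echo "Something went wrong :("
fi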

Related

Bash - check if repository exists

I am trying to create an if statement which will check whether a repository with name X exists; if it doesn't, create it.
I made the following code. It works, but when the repository doesn't exist, it shows an error. I couldn't find any way of removing that error from the console. Maybe I was using &>/dev/null in the wrong way...
myStr=$(git ls-remote https://github.com/user/repository);
if [ -z $myStr ]
then
echo "OMG IT WORKED"
fi
Since you completely silence git ls-remote, I would suggest checking the exit code of the command ($?) rather than its output.
Based on your code, you could wrap this in a function:
check_repo_exists() {
    repoUrl="$1"
    myStr="$(git ls-remote -q "$repoUrl" &> /dev/null)"
    if [[ "$?" -eq 0 ]]
    then
        echo "REPO EXISTS"
    else
        echo "REPO DOES NOT EXIST"
    fi
}
check_repo_exists "https://github.com/kubernetes"
# REPO DOES NOT EXIST
check_repo_exists "https://github.com/kubernetes/kubectl"
# REPO EXISTS
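Since the captured output is discarded anyway, you can also drop the temporary variable and test the command directly in the if. To cover the original "if it doesn't exist, create it" goal, here is a sketch where "create" means initializing a fresh local repository (that interpretation is an assumption):
ensure_repo_exists() {
    local repoUrl="$1" localDir="$2"
    if git ls-remote -q "$repoUrl" &> /dev/null; then
        git clone "$repoUrl" "$localDir"
    else
        # remote missing or unreachable: start a fresh local repo instead
        git init "$localDir"
    fi
}
ensure_repo_exists "https://github.com/user/repository" repository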

Testing server setup bash scripts

I'm just learning to write bash scripts.
I'm writing a script to set up a new server.
How should I go about testing the script?
For example, I use apt install for certain packages like apache, php etc., and then a couple of lines further down there is an error.
I then need to fix the error and run the script again, but that re-runs all the install commands.
The system will probably say a package is already installed, but what about commands which append strings to files?
If those are run again, the same string is appended to the file a second time.
What is the best approach to writing bash scripts like this?
Can you do test runs which roll back everything after an error or at the end of the script?
Or, even better, can the script continue from the line where the error occurred the next time it is run?
I'm doing this on an Ubuntu 18.04 server.
It's a matter of how readable you want it to be, but (the braces make sure touch only runs after a successful install):
[ -f .step01-done ] || { your install command && touch .step01-done; }
[ -f .step02-done ] || { your other install command && touch .step02-done; }
maybe a little easier to read:
if ! [ -f .step01-done ]; then
    if your install command ; then
        touch .step01-done
    fi
fi
if ! [ -f .step02-done ]; then
    if your other install command ; then
        touch .step02-done
    fi
fi
...or something in between.
Now, I would suggest creating a directory somewhere and maybe logging output from the commands to some file there (maybe tee it), but definitely putting all these files you are creating with touch there. That way, if you start it from another directory by accident, it won't matter. You just need to make sure that apt-get, or whatever you use, actually returns a non-zero status when it fails. It should.
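A minimal sketch of that suggestion, assuming a dedicated /root/mysetup state directory and using tee so output is shown live and logged (run_step and the file layout are illustrative, not from the answer above):
#!/bin/bash
STATE_DIR=/root/mysetup            # holds the .done markers and the logs
mkdir -p "$STATE_DIR"

run_step() {                       # usage: run_step <id> <command...>
    local id="$1"; shift
    local marker="$STATE_DIR/$id.done"
    [ -f "$marker" ] && return 0   # already completed on a previous run
    # tee shows the output live and keeps a copy;
    # PIPESTATUS[0] is the command's exit status, not tee's
    "$@" 2>&1 | tee "$STATE_DIR/$id.log"
    local status=${PIPESTATUS[0]}
    [ "$status" -eq 0 ] && touch "$marker"
    return "$status"
}

run_step step01 apt-get install -y apache2 || exit 1
run_step step02 apt-get install -y php || exit 1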
You could even make a function that does it in a nice way...
#!/bin/bash
function do_cmd() {
    if [ -f "$1.done" ]; then
        echo "$2: skipping already completed step"
        return 0
    fi
    echo -n "$2: "
    $3 1> "$1.out" 2> "$1.err"
    if [ $? -eq 0 ]; then
        echo "ok"
        touch "$1.done"
        return 0
    else
        echo "failed"
        echo -e "see \"$1.out\" and/or \"$1.err\" for details."
        return 1
        # could "exit 1" instead
    fi
}
[ -d /root/mysetup ] || mkdir /root/mysetup
if ! [ -d /root/mysetup ]; then
    echo "failed to find or create /root/mysetup directory"
    exit 1
fi
cd /root/mysetup || exit 1
# ---------------- your steps go here -------------------
do_cmd prog1 "installing prog1" "apt-get install prog1" || exit 1
do_cmd prog2 "installing prog2" "apt-get install prog2" || exit 1
do_cmd startfoo "starting foo service" "service foo start" || exit 1
echo "all setup functions finished."
You would use:
do_cmd identifier "description" "command or function"
identifier: unique identifier used when files are generated:
    identifier.out: standard output from the command
    identifier.err: standard error from the command
    identifier.done: created when the command is successful
description: this is printed to the terminal when the step is being executed
command or function: the actual command to run
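Because $3 is executed as a plain command line, a step that needs several commands can be wrapped in a shell function and passed by name; a hypothetical example (configure_apache is made up):
# made-up step bundling two commands into one unit for do_cmd
configure_apache() {
    a2enmod rewrite && systemctl reload apache2
}
do_cmd apache "configuring apache" configure_apache || exit 1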

Git apply ignoring failed patches

I have a bash script that applies all git patches in a directory (see bottom for the script). This script is run every time I deploy my website on my server.
I'm now running into an issue where, after a few weeks, a patch throws an error and exits the script with the error "patch does not apply". Does anyone know if there is a way to ignore broken/old patches and perhaps just show an error, rather than completely exiting the script and causing my website deployment to fail?
for file in ${PROJECT_PATH}/${PATCH_DIR}/*.patch; do
    if [[ -e ${file} ]]; then
        echo -n "Applying patch '${file}' ... "
        ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"
        echo "Done"
    fi
done
I don't see any reason why it would stop applying the patches. If one failed, you might get some error output, and then you said "Done" (which could be a little misleading, I think), and then the for loop would continue.
For starters, you need to check whether each patch was applied successfully. Something like this (adjust to your needs):
for file in ${PROJECT_PATH}/${PATCH_DIR}/*.patch; do
    if [[ -e ${file} ]]; then
        echo -n "Applying patch '${file}' ... "
        ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"
        if [ $? -ne 0 ]; then
            # there was an error
            echo "git apply of patch $file failed on $GIT_PROJECT_PATH/$PROJECT_PATH"
        else
            echo "Done"
        fi
    fi
done
Or something along those lines.
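To fully match the goal of never aborting the deployment, you could also count failures and just warn at the end; a sketch along the same lines (the failed counter is an addition):
failed=0
for file in "${PROJECT_PATH}/${PATCH_DIR}"/*.patch; do
    [[ -e ${file} ]] || continue
    echo -n "Applying patch '${file}' ... "
    if ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"; then
        echo "Done"
    else
        echo "FAILED (skipped)"
        failed=$((failed + 1))
    fi
done
if (( failed > 0 )); then
    echo "warning: ${failed} patch(es) did not apply" >&2
fi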

Test file existence from a symlink path

I have the following script:
if [ -e "/home/$USER/works/bash/mv-to-parent.sh" ] ; then
    # do something...
else
    echo "not found"
fi
Executing it, I get the "not found" message every time, even though the file is there; I've figured out the problem is related to the fact that the "works" folder is a symlink (/home/$USER/works -> /media/data/works).
Is it possible to make it work using the symlink path?
If you think the symlink is the issue, then use readlink -f, e.g.:
if [ -e "$(readlink -f "/home/$USER/works/bash/mv-to-parent.sh")" ] ; then
    # do something...
else
    echo "not found"
fi
However, I am not sure it will fix your issue. Except for -h and -L, all file-based tests already dereference symbolic links.
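A quick demonstration of that behavior (the /tmp paths are made up for the example):
touch /tmp/target
ln -sf /tmp/target /tmp/link
[ -e /tmp/link ] && echo "-e follows the symlink to the target"
[ -L /tmp/link ] && echo "-L tests the symlink itself"
# -e is only false for a symlink whose target is missing (a dangling link)
ln -sf /tmp/missing /tmp/dangling
[ -e /tmp/dangling ] || echo "-e is false for a dangling symlink"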

Capture output from git command?

I am writing a script to automate setting up new projects for me.
This includes pulling down a GitHub repository.
What I want to do is have some output from my script, then call git clone $repo.
I want to show the output from that command while it is running; then, if it ran successfully, replace its output (just the git command's output; I still want the output from before that to be there) with "repository successfully cloned", and if it failed, leave the output there and print "repository cloning failed".
How can I do this?
Below is my current (rather simple) script.
#! /bin/bash
# -p project name
templateurl="git@bitbucket.org:xxx/xxx-site-template.git"
while getopts ":p:" opt; do  # eventually I'll add more options here
    case $opt in
        p)
            project=$OPTARG
            ;;
        \?)
            echo "Invalid option: -$OPTARG" >&2
            exit 1
            ;;
        :)
            echo "Option -$OPTARG requires an argument." >&2
            exit 1
            ;;
    esac
done
if [ -z "$project" ]; then
    echo "Project name required"
    exit 1
fi
clear
echo "|==========================|"
echo "| New xxx Project Creator  |"
echo "|==========================|"
echo "Project: $project"
if [ -d "$project" ]; then
    echo "Directory $project already exists!"
    exit 1
fi
mkdir "$project"
if [ ! -d "$project" ]; then
    echo "Failed to create project directory!"
    exit 1
fi
echo "Cloning xxx Template repository"
git clone "$templateurl" "$project"
git clone does provide an exit code you can read with $?, as follows:
git clone user@server:repo
echo $?
This will print 0 if everything worked just fine. If, for example, the folder is not a git repository, you will get exit code 128.
You can check whether the clone worked as follows:
git clone user@server:repo localrepo --quiet
success=$?
if [[ $success -eq 0 ]]; then
    echo "Repository successfully cloned."
else
    echo "Something went wrong!"
fi
--quiet will suppress any output from git as long as there are no errors. So if you just remove the else branch, you will get your positive output or the error produced by git.
git clone user@server:repo localrepo > git.log 2>&1
if [[ $? -eq 0 ]]; then
    echo "Repository successfully cloned."
else
    cat git.log
    echo "Repository cloning failed."
fi
rm git.log
Explanation:
git clone user@server:repo localrepo > git.log 2>&1
Redirects the stdout and stderr streams to git.log: > git.log redirects stdout to git.log, and 2>&1 redirects stderr to the same place as stdout (thus, git.log).
[[ $? -eq 0 ]] checks the return code of git, which should be 0 if the clone was successful.
cat git.log outputs the contents of the git.log file.
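The question also asked to show git's output while the clone is running. A sketch that gets close, using tee to display and log at the same time (--progress forces git's progress meter even though stderr is piped; PIPESTATUS picks up git's exit code rather than tee's):
git clone --progress user@server:repo localrepo 2>&1 | tee git.log
if [[ ${PIPESTATUS[0]} -eq 0 ]]; then
    echo "Repository successfully cloned."
else
    echo "Repository cloning failed; see git.log" >&2
fi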
