Kubernetes - check if resources defined in YAML file exist - bash

I am creating a bash script to automate certain actions in my cluster. One of the commands is: kubectl delete -f example.yaml.
The problem is that when the resources defined in the YAML do not exist, the following error is printed:
Error from server (NotFound): error when deleting "example.yaml": deployments.apps "my_app" not found
I am looking to add an additional step that first checks whether a set of resources defined in a YAML file exist in the cluster. Is there a command that would allow me to do so?
From the documentation, I found:
Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml
but I find it difficult to parse the output that it returns. Is there a better alternative?

To find out whether the objects described in a manifest file are already present in the cluster exactly as described, you can use the exit code of the kubectl diff command.
Exit status:
0 No differences were found.
1 Differences were found.
>1 Kubectl or diff failed with an error.
Example:
kubectl diff -f crazy.yml &>/dev/null
rc=$?
if [ $rc -eq 0 ]; then
    echo "The exact object is already installed on the cluster"
elif [ $rc -eq 1 ]; then
    echo "The object is either not installed or differs from the manifest file"
else
    echo "Unable to determine the difference"
fi
Alternatively, if you really do want to parse the output, the KUBECTL_EXTERNAL_DIFF environment variable can be used to select your own diff command (external commands with parameters are allowed), so you can have the diff printed in whatever format is easiest to parse.

Related

passing null value from az cli to bash without exiting

I'm trying to get the value of a resource in azure via AZ CLI, and pass that value to a variable in bash.
id=$(az synapse workspace show -n $name -g $rsname --query 'identity.principalId' -o tsv 2<&1)
if [[ $id == *"Not Found"* ]];
then
echo "Workspace already deleted."
fi
If the resource is not there, I am redirecting the output to the variable with 2<&1 so I can deal with it in the if-then conditional. $id is getting assigned the output correctly, but AZ CLI is still exiting the script with error "not found".
Is there any way to keep it from exiting?
In your bash command you are using 2<&1, which redirects stdin rather than stderr; that is why the error text never lands in the variable and the script still fails with "not found".
You can achieve what you want by using "2>&-" instead.
Make sure you use the greater-than (>) symbol:
id=$(az synapse workspace show -n '<synapse Name>' -g '<Resource Group Name>' --query 'identity.principalId' -o tsv 2>&-)
Using "2>&-" closes stderr, so the error output is discarded and I am able to get the principal id cleanly.
Using "2>&1" merges the error text into the captured value, so I am not able to get the principal id by itself.
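Alternatively, here is a more general sketch of the capture-and-test pattern, if you would rather keep 2>&1 and handle the failure yourself. The function name and the "Not Found" match are illustrative, and the command is passed in as arguments, so the snippet is not tied to az:

```shell
#!/bin/bash
# get_principal_id: run a command, capturing stderr along with stdout.
# If it fails because the resource is gone, report that and return 0 so
# a calling script (even one using set -e) keeps going; otherwise pass
# the output and exit code through unchanged.
get_principal_id() {
    local out rc
    # 2>&1 (note: >, not <) merges stderr into the captured output
    out=$("$@" 2>&1) && rc=0 || rc=$?
    if (( rc != 0 )) && [[ $out == *"Not Found"* ]]; then
        echo "Workspace already deleted."
        return 0
    fi
    printf '%s\n' "$out"
    return $rc
}
```

In the real script the arguments would be the az call, e.g. `get_principal_id az synapse workspace show -n "$name" -g "$rsname" --query 'identity.principalId' -o tsv`.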
Reference:
Check whether a program exists or not in bash
Usage of bash IO redirection

Module files don't return non-zero exit codes to bash if the module fails to load. How can you make a conditional in bash with that?

I'm new here, so I apologize in advance if I'm not following protocol, but the message said to ask a new question. I had asked an earlier question, How can a bash script try to load one module file and if that fails, load another?, but it is not a duplicate of Bash conditional based on exit code of command, as it was marked.
The reason is that the module load does not return a non-zero exit code if it fails to load. These are the Environment Modules that I am trying to use.
For example,
#!/bin/bash
if module load fake_module; then
echo "Should never have gotten here"
else
echo "This is what I should see."
fi
results in
ModuleCmd_Load.c(213):ERROR:105: Unable to locate a modulefile for 'fake_module'
Should never have gotten here
How can I attempt to load fake_module and if that fails attempt to do something else? This is specifically in a bash script. Thank you!
Edit: I want to be clear that I don't have the ability to modify the module files directly.
Use the command's output/error instead of its return value, and check whether the keyword ERROR appears in that output:
#!/bin/bash
RES=$( { module load fake_module; } 2>&1 )
if [[ "$RES" != *"ERROR"* ]]; then
    echo "Should never have gotten here"  # the command has no errors
else
    echo "This is what I should see."     # the command has an error
fi
Old versions of Modules, like the 3.2 version you use, always return 0 whether the load fails or succeeds. With this version you have to parse the output, as proposed by @franzisk. Note that Modules prints its messages on stderr (stdout is reserved for the environment changes to apply).
If you do not want to rely on error messages, you can list loaded modules after the module load command with module list command. If module is not found in module list command output it means module load attempt failed.
module load fake_module
if [[ "$(module list -t 2>&1)" = *"fake_module"* ]]; then
    echo "Should never have gotten here"  # the module was loaded
else
    echo "This is what I should see."     # the module failed to load
fi
Newer versions of Modules (>= 4.0) return an appropriate exit code, so your initial example will work as-is with these newer versions.
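Putting the two approaches together, here is a sketch that should behave the same on both old and new Modules versions (the function name try_load is illustrative only): trust a nonzero exit code when one is returned, and fall back to scanning the output for ERROR to cover 3.x, which always exits 0.

```shell
#!/bin/bash
# try_load: attempt to load a module; return 1 on failure.
# Newer Modules (>= 4.0) report failure via the exit code; older 3.x
# always exits 0, so we also check the captured stderr for ERROR.
try_load() {
    local out rc
    out=$(module load "$1" 2>&1) && rc=0 || rc=$?
    if (( rc != 0 )) || [[ $out == *ERROR* ]]; then
        return 1
    fi
}
```

Used as `if try_load fake_module; then ...; else ...; fi`, this gives the conditional the question asked for without depending on the Modules version.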

Jenkins build passes when the shell script has execution errors in it

I have a shell script that executes multiple SQL files that apply updates to the database. I am calling the shell script from Jenkins (Build > Execute shell). The Jenkins console shows SUCCESS at all times, irrespective of errors from the SQL files. I want Jenkins to fail the build if any of the SQL files fails to execute, and to send the console output to the developer on failure.
I tried echo $? in the shell script but it shows 0.
#!/bin/bash
walk_dir () {
    shopt -s nullglob dotglob
    for pathname in "$1"/*; do
        if [ -d "$pathname" ]; then
            walk_dir "$pathname"
        else
            case "$pathname" in
                *.sql|*.SQL)
                    printf 'Executing SQL file: %s\n' "$pathname"
                    sudo -u postgres psql <DBName> -f "$pathname"
                    rm "$pathname"
                    ;;
            esac
        fi
    done
}
DOWNLOADING_DIR=/home/jenkins/DB/
walk_dir "$DOWNLOADING_DIR"
Jenkins Console results
ALTER TABLE
ERROR: cannot change return type of existing
DETAIL: Row type defined by OUT parameters is different.
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
Finished: SUCCESS
Expected result: the Jenkins build fails (if any of the SQL files failed to execute from the shell script), but it is showing as passed.
Thanks for all the inputs. I was able to fix this issue: I installed the Log Parser plugin in Jenkins, which parses the console output for keywords like /Error/ and marks the build as failed when they appear.
This S/O answer will probably address your scenario: Automatic exit from bash shell script on error
This duplicate answer also provides useful guidance: Stop on first error [duplicate]
Essentially, use set -e or #!/bin/bash -e.
If you don't trap a potential error, the next step in the script will still execute, and the script's overall return code will be that of its last command.
Direct link to www.davidpashley.com - Writing Robust Bash Shell Scripts
Note that this also assumes any external commands (e.g. psql) properly return nonzero status codes on failure.
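As a concrete sketch of that advice for this script: psql supports -v ON_ERROR_STOP=1, which makes it exit nonzero on the first SQL error instead of continuing and exiting 0, so set -e has something to catch (the database name mydb and the wrapper function are placeholders):

```shell
#!/bin/bash
# run_sql_file: execute one SQL file against a database and propagate
# the exit status. -v ON_ERROR_STOP=1 makes psql stop at the first SQL
# error and exit nonzero; without it, psql keeps going and exits 0.
run_sql_file() {
    sudo -u postgres psql "$1" -v ON_ERROR_STOP=1 -f "$2"
}

# In the main script:
#   set -e                         # abort on the first failing command
#   run_sql_file mydb "$pathname"  # a failure here now fails the Jenkins build
```

With set -e active, the first file that fails aborts the script, the script exits nonzero, and Jenkins marks the build FAILED.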
The code below may help you; this is how I sorted out the same issue.
Instead of sudo -u postgres psql <DBName> -f $pathname, use:
OUTPUT=$(psql -U postgres -d <DBName> -c "\i $pathname;" 2>&1)
if echo "$OUTPUT" | grep -q ERROR
then
    echo "Error while running sql file $pathname"
    exit 2
else
    echo "$pathname - SQL file executed successfully"
fi
Note the 2>&1: psql prints errors on stderr, so without it $OUTPUT would never contain the ERROR text.
Jenkins will not see the SQL files' error codes by itself. Jenkins only checks the exit status of your shell script, and that is generally zero because the script itself ran to completion.

Passing variables to sftp batch

I'm writing a script that needs to pass a variable to the sftp batch. I have been able to get some commands working based on other documentation I've searched out, but can't quite get to what I need.
The end-goal is to work similar to a file test operator on a remote server:
if [ -f "$a" ]; then :; else exit 0; fi
Ultimately, I want the file to continue running the script if the file exists (:), or exit 0 if it does NOT exist (not exit 1). The remote machine is a Windows server, not Linux.
Here's what I have:
NOTE: the variable I'm trying to pass, $source_dir, changes based on the input parameter of the script that calls this function. This and the ls wildcard are the tricky parts. I have been able to make it work when looking for a specific file, but not just "any" file.
source_dir=/this/directory/changes
RemoteCheck () {
    /bin/echo "cd $source_dir" > someBatch.txt
    /bin/echo "ls *" >> someBatch.txt
    /usr/bin/sftp -b someBatch.txt -oPort=${sftp_port} ${sftp_ip}
    exit_code=$?
    if [ $exit_code -eq 0 ]; then
        :
    else
        exit 0
    fi
}
There may be a better way to do this, but I have searched multiple forums and have not yet found a way to manipulate this.
Any help is appreciated, you gurus have always been very helpful!
You cannot test for the existence of an arbitrary file using just the exit code of the OpenSSH sftp client.
You can, however, redirect the sftp output to a file and parse it to see whether any files were listed.
You can use local echo commands to delimit the listing from the rest of the output, like:
!echo listing-start
ls
!echo listing-end
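A sketch of that idea (the helper names and batch layout are illustrative): generate the marker-delimited batch, then strip the markers and the echoed "sftp> " command lines from the captured output; whatever remains between the markers is the remote listing, and an empty result means no files exist.

```shell
#!/bin/bash
# make_batch: build an sftp batch that brackets the remote listing
# with locally echoed marker lines.
make_batch() {
    printf 'cd %s\n!echo listing-start\nls\n!echo listing-end\n' "$1"
}

# extract_listing: keep only the lines between the two markers.
# sftp echoes each batch command with an "sftp> " prefix, so those
# lines are filtered out, and the marker lines themselves are dropped.
extract_listing() {
    sed -n '/^listing-start$/,/^listing-end$/p' \
        | grep -v '^sftp> ' \
        | sed '1d;$d'
}
```

Usage would look like `listing=$(sftp -b <(make_batch "$source_dir") -oPort="$sftp_port" "$sftp_ip" 2>&1 | extract_listing)`, after which `[ -n "$listing" ]` tells you whether any files exist.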

Shell script to check whether the mounted path given in fstab/mtab is valid

I have mounted several Windows machines in a Linux machine through fstab.
Eg:
//vmdevmachine/sharedfolder /var/lib/jenkins/Windows/ cifs gid=users,file_mode=0664,dir_mode=0775,auto,username=user_name,password=password123
I have a few jobs to run that depend on machines mounted like this.
My requirement: before running a job, a shell script needs to check whether the mounted file/directory is valid/exists.
i.e. from the above example, it needs to check whether //vmdevmachine/sharedfolder exists or not.
Thanks,
-Rajiv
If the directory is currently mounted and is not reached through a link such as a symlink, I generally use something like the below.
if [ ! -d "/var/lib/jenkins/Windows/" ]; then
    exit 1  # the directory does not exist
fi
If the directory (-d) does not exist (!), then exit with a nonzero status. (Note that exit $? here would actually exit with 0, because the successful [ test itself is the last command to run.) If you are going to be checking multiple directories, you may want to put them in an array and iterate through them, like so:
files=( "/var/lib/jenkins/Windows/" "/var/lib/jenkins/Windows/2" "/var/lib/jenkins/Windows/3" )
for i in "${files[@]}"
do
    if [ ! -d "$i" ]; then
        exit 1  # a directory is missing
    fi
done
In the above for loop example $i will be the directory during each instance of the for loop. I hope this helps you out as I am still getting used to posting here.
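Note that [ -d ... ] only proves the directory exists, not that the CIFS share is actually mounted on it; if the mount dropped, the empty mount-point directory would still pass the test. A sketch using mountpoint(1) from util-linux, with a /proc/mounts fallback (the function name is illustrative):

```shell
#!/bin/bash
# is_mounted: succeed only if the given directory is an active mount point.
is_mounted() {
    if command -v mountpoint >/dev/null 2>&1; then
        # mountpoint -q returns 0 only for an active mount point
        mountpoint -q "$1"
    else
        # fallback: look for the path as the mount target in /proc/mounts
        awk -v d="$1" '$2 == d { found=1 } END { exit !found }' /proc/mounts
    fi
}
```

A job script could then guard itself with `is_mounted /var/lib/jenkins/Windows || exit 1` (note: /proc/mounts stores the target without a trailing slash, so pass the path in that form for the fallback).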
