Getting First Run status of Test Scripts in QC through OTA

I would like to get the 'First Run Status' of the scripts present in a given path through OTA.
Ex:
If I give a QC Test Lab path, I would like to see all the Test Scripts along with their First Run Status.
Say 1234 is a Test Script. On its first run it might have Passed, Failed, or any other status. The next day, if it failed, the same script might be run again and end up with a new status.
I would like to know the First Run Status.
The code should access the RUN table to extract ALL runs of the Test Script from the Test Lab path given as input by the user.

If you have a list of runs (see this post to get it), just iterate over the runs and get the execution date and execution time fields to find the first run:
exec_date = run.Field("RN_EXECUTION_DATE")
exec_time = run.Field("RN_EXECUTION_TIME")
When you have the run you want, get the run status:
status = run.Status
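For illustration, here is a minimal VBScript sketch of that loop. It assumes an already-connected OTA session and a TSTest object (tsTest) resolved from the user's Test Lab path; the field names are the RUN-table columns mentioned above.
' Assumes tsTest is a TSTest object obtained from the Test Lab path
Dim runFactory, runList, run, firstRun
Set runFactory = tsTest.RunFactory
Set runList = runFactory.NewList("")   ' all runs of this test instance
Set firstRun = Nothing
For Each run In runList
    ' Keep the run with the earliest execution date (time as tie-breaker)
    If firstRun Is Nothing Then
        Set firstRun = run
    ElseIf run.Field("RN_EXECUTION_DATE") < firstRun.Field("RN_EXECUTION_DATE") _
        Or (run.Field("RN_EXECUTION_DATE") = firstRun.Field("RN_EXECUTION_DATE") _
        And run.Field("RN_EXECUTION_TIME") < firstRun.Field("RN_EXECUTION_TIME")) Then
        Set firstRun = run
    End If
Next
If Not firstRun Is Nothing Then
    WScript.Echo "First run status: " & firstRun.Status
End If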

Related

How to get the value of a particular column after running an OpenShift command using a shell script?

I am running the below command on the OpenShift platform, which produces some output:
oc get pods
name status
job1 Running
job2 Completed
How do I extract only the status from the above result and store it in a variable using a shell script?
ex: status=completed
Try status=$(oc get pods -o=custom-columns=STATUS:.status.phase --no-headers). You can echo "$status" to see the list of statuses saved in the variable.
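If you only want the status of a single pod, the same approach can be narrowed down. A small sketch, assuming the pod name job2 from the example output above:
# Pod name "job2" is taken from the question's example output
status=$(oc get pod job2 -o=custom-columns=STATUS:.status.phase --no-headers)
echo "status=$status"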
If you add --output=json to your commands you can use jq to select the status. I find bash scripts great for running commands, but there are a lot of drawbacks when it comes to parsing their output. With JSON you can select the correct key regardless of the output format.
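A short sketch of the jq variant, again assuming the pod name job2; note that the JSON phase of a completed pod is reported as Succeeded rather than Completed:
# Select one pod's phase from the JSON output (pod name is illustrative)
status=$(oc get pods --output=json | jq -r '.items[] | select(.metadata.name == "job2") | .status.phase')
echo "status=$status"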

sqlplus does not execute the query if it is called via an external ssh connection

I have a script on a Unix server which looks like this:
mainScript.sh
#some stuff here
emailScript.sh $PARAM_1 $PARAM_2
#some other stuff here
As you can see, mainScript.sh is calling another script called emailScript.sh.
The emailScript.sh is supposed to perform a query via sqlplus, then parse the results and return them via email if any.
The interesting part of the code in emailScript.sh is this:
DB_SERVER=$1
USERNAME=$2
PASSWORD=$3
EVENT_DATE=$4
LIST_AUTHORIZED_USERS=$5
ENVID=$6
INTERESTED_PARTY=$7
RAW_LIST=$(echo "select distinct M_OS_USER from MX_USER_CONNECTION_DBF where M_EVENT_DATE >= to_date('$EVENT_DATE','DD-MM-YYYY') and M_OS_USER is not null and M_OS_USER not in $LIST_AUTHORIZED_USERS;" | sqlplus -s $USERNAME/$PASSWORD@$DB_SERVER)
As you can see, all I do is just creating the variable RAW_LIST executing a query with sqlplus.
The problem is the following:
If I call mainScript.sh from the command line (PuTTY / KiTTY), the sqlplus command works fine and returns something.
If I call mainScript.sh from an external job (an ssh connection opened on the server by a Jenkins job), sqlplus returns nothing and takes 0 seconds, meaning it doesn't even try to execute.
In order to debug, I've printed all the variables and the query itself to check whether something wasn't properly set: everything is correctly set.
It really seems that the sqlplus command is not recognized, or something like that.
Would you please have any idea of how I can debug this? Where should I look for the issue?
You need to consider a few things here. When you run the script yourself, which directory are you executing it from? And when your external application executes it, which directory does it run from? It is safer to use the full path to the script, like /path/to/the/script/script.sh, or to cd /path/to/the/script/ first and then execute it.
Also check the execute permissions for your application. You as a user might have permission to execute the script or the sql command, but your application might not. Check which user id your application runs as and add it to the proper group.
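Building on that advice, a defensive version of the sqlplus call might look like the sketch below. The ORACLE_HOME path is purely an assumption; adjust it to the server's actual Oracle client install, since a non-interactive ssh session typically does not load the login profile that sets it.
# ORACLE_HOME below is a hypothetical path -- adjust to your install;
# non-interactive ssh sessions usually don't source ~/.profile, so set it here
export ORACLE_HOME=/opt/oracle/product/12.1/client_1
export PATH="$ORACLE_HOME/bin:$PATH"
cd /path/to/the/script/ || exit 1     # run from a known directory
RAW_LIST=$(echo "select distinct M_OS_USER from MX_USER_CONNECTION_DBF ...;" | "$ORACLE_HOME/bin/sqlplus" -s "$USERNAME/$PASSWORD@$DB_SERVER")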

Catch outcome of jenkins job in shell command variable

I have a job in Jenkins that uses two commands in an Execute Shell build step.
The first does a test run, the second creates a report out of it. It looks a little bit like this:
node_modules/gulp/bin/gulp.js run-cucumber-tests
node_modules/gulp/bin/gulp.js create-cucumber-report
If there are test failures, the command will exit with code 1. This means the second command won't even be fired. But even though the first command failed, I want the report to be created!
What I've tried is to do this:
node_modules/gulp/bin/gulp.js run-cucumber-tests || true
node_modules/gulp/bin/gulp.js create-cucumber-report
Now the report does get created, but the build is marked as succeeded. That's not what I want: I want the Jenkins build to eventually fail, but with the report created.
I was wondering: maybe I can catch the outcome of the first command in a variable, continue with the second, and then re-throw it after the second command.
You can use set +e to let the script continue even if an error occurred and then use $? to capture the result of the last command. With exit you can force the result code of the script to the previously captured value.
set +e
node_modules/gulp/bin/gulp.js run-cucumber-tests
RESULT=$?
node_modules/gulp/bin/gulp.js create-cucumber-report
exit $RESULT
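This works in a Jenkins Execute Shell step because Jenkins takes the build result from the script's exit code: the report step always runs, and exit $RESULT re-fails the build whenever the test step failed.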

How can I automate a script [Ruby] to run at a given time every day?

I've written a webcrawler that pulls information into a report and would like to run it every day at 12:00pm. The script is run using:
ruby script.rb
I've tried using the whenever gem (https://github.com/javan/whenever).
My directory structure is this:
/config
    schedule.rb
script.rb
In my script.rb file, I have the following:
every :day, :at => '12:00pm' do
  command "ruby script.rb"
end
I've modified the :at time to see if it runs, and it doesn't.
I've also tried:
every :day, :at => '12:00pm' do
  `ruby script.rb`
end
I've also looked into the "at" Linux utility, but it appears suited to one-time jobs. I'd like this to generate a report every day.
Note: the script specifies where to output so I don't need to give it an output.
I've also tried creating a crontab but have encountered a problem with saving.
I use http://crontab-generator.org/ to generate the correct syntax.
Then I run:
crontab -e
which opens vi, and I paste in the syntax. However, it exits with a status of 1, and if I run:
crontab -l
it says there are no jobs listed.
I've also tried running this as the superuser, and it exits the same way.
The error message is
"/usr/bin/vi" exited with status 1
I just want a command to run at a given time, what am I missing?
Edit
Does it matter that I'm on a Mac?
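For reference, the whenever gem only generates crontab entries from config/schedule.rb (it does not run jobs itself; the schedule is installed with whenever --update-crontab), and a crontab line for a daily 12:00pm run would look like the sketch below. Both paths are assumptions and need to match the actual ruby binary and script location; if crontab -e keeps failing with the vi error, pointing it at a different editor (e.g. export EDITOR=nano) before running it is a common workaround.
# minute hour day-of-month month day-of-week command
# (paths are hypothetical -- use `which ruby` and the script's absolute path)
0 12 * * * /usr/bin/ruby /path/to/script.rb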

capture exit code from a script flow

I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. It takes the arguments passed to it, parses them in some way, and runs script B, feeding it those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands (on the remote machine) and finally runs script C (also on the remote machine). I am passing those commands using a HERE DOCUMENT, and I'm passing the previous arguments on to this script too.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is ran by a scheduling process, I need to tell it if the job completed successfully or not. I'm doing this via exit codes, so what I want is to have a chain of exit codes, returning back from the last script to the first, in case of errors. I'm not able to perform this, because exit codes works correctly for the single scripts (I tried executing them singularly and look for the exit codes), but they are not sended back to the parent script. In my opinion, the problem is that ssh is getting the exit code from the child script, which in fact ended successfully, because there was no error executing it: it's the command inside of it that gone wrong.
Even though the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
yet the script completes successfully.
I hope you understand what I wrote, I can eventually post my scripts here.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with variable names because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed olivier's advice and used the escape character so that the variable is expanded by the remote machine.
I also implemented different exit codes based on where the error occurred.
Finally, I modified the first script as follows, right after the sudo -u call that launches the second script:
EXITCODEOFTHESECONDSCRIPT=$?
if [ $EXITCODEOFTHESECONDSCRIPT = 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit $EXITCODEOFTHESECONDSCRIPT
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the process itself worked well even in case of errors; the problem was that running more commands after the second script failed (an echo was enough) produced other exit codes that overwrote the one I wanted.
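Putting the pieces together, a minimal sketch of the whole pattern (names are illustrative, not the poster's actual scripts):
# Script B: run the remote commands via a here-document. \$? is escaped so it
# expands on the REMOTE machine, making ssh return the remote exit code.
sudo -u user ssh user@REMOTEMACHINE <<EOF
    /path/to/scriptC.sh "$ARG1" "$ARG2"
    exit \$?
EOF
EXITCODE=$?   # capture immediately -- any later command (even echo) would overwrite it
exit $EXITCODE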
Thanks to all !
