Jenkins always reports SUCCESS when the shell script actually failed - shell

I'm facing this challenge in my current Jenkins setup: a shell (bash) script is executed remotely, and the build is still marked SUCCESS in cases like these:
Permission denied while my installer was being copied
Unable to connect over SSH
Any suggestions on how I can make Jenkins fail the build in these cases? Any pointers?
Thanks in advance

A pipeline will fail if a script or program returns a non-zero exit code. There are programs, such as Robocopy, that execute a command, fail, and still return 0. Jenkins has no way of knowing that the program was unsuccessful and marks the pipeline as a success.
So that is basically what you have to do: make sure your script exits with a non-zero value whenever something goes wrong, and the pipeline will fail.
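For example (a minimal sketch, assuming the script is launched from an "Execute shell" build step; the host name and installer path are made up), the remote exit status can be propagated explicitly so that an SSH or permission failure turns the build red:
#!/bin/bash
# ssh exits with the remote command's status, or 255 if the connection itself
# fails, so forwarding that status is enough to fail the build.
ssh deploy@app-server 'bash /opt/installer/install.sh'
status=$?
if [ "$status" -ne 0 ]; then
    echo "Remote installation failed with exit code $status" >&2
    exit "$status"
fi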

Related

How to call ODI Scenario from command line and wait for its execution

I'd like to call an ODI scenario from the command line and wait until it's done. I am using ODI 12c with a standalone agent installed. I already found out that you can use the startscen.cmd command, and it works for me. The only problem is that cmd is not waiting for the scenario to finish. Any suggestions on how to achieve something like that?
My .bat file looks like this:
cd C:\Oracle\Middleware\Oracle_Home\user_projects\domains\base_domain\bin
call startScen.cmd "-INSTANCE=CITestAgent" MAPPING 1_0_0 GLOBAL "-SESSION_NAME=TEST_RUN" "-AGENT_URL=http://localhost:20910/oraclediagent"
cd C:\Users\Redekera\documents\testFiles
"C:\Users\REDEKERA\Documents\instantclient_19_3\sqlplus.exe" db_user/pw#db/scheme #run_tests_lieferschein.sql
After that command, I'd like to run a SQL script via SQL*Plus, which needs to wait until the scenario has finished.
Thanks for the help, guys :)
By default, startscen.cmd will wait for the execution to finish before returning.
This can be changed with the parameter -ASYNC=yes to start the execution asynchronously. In that case it returns the SESSION number, which is useful for checking the status of the execution.
If you want the second command to execute only if the first one exited successfully, chain them with &&, e.g.:
call startScen.cmd "-INSTANCE=CITestAgent" MAPPING 1_0_0 GLOBAL "-SESSION_NAME=TEST_RUN" "-AGENT_URL=http://localhost:20910/oraclediagent" && "C:\Users\REDEKERA\Documents\instantclient_19_3\sqlplus.exe" db_user/pw@db/scheme @run_tests_lieferschein.sql
Extracted from here
The main idea is the "&&" operator!

Running shell script from Jenkins

When I execute the job from a terminal, it runs for one hour without any issue. When I execute the shell script from Jenkins, it runs for just one minute and then stops. The Jenkins console output is as follows:
Creating folder path in /jenkins/workspace/load_test/scripts/loadtest/loadtest1
PWD is : /jenkins/workspace/load_test/scripts/loadtest
Running /jenkins/workspace/load_test/scripts/loadtest/loadtest1/testRestApi.sh
1495126268
1495129868
3600
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
Finished: SUCCESS
Any ideas/suggestions on how to make the script run for one hour from the Jenkins job?
Have you tried with BUILD_ID=dontKillMe? It is commonly used for daemons (see https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller); this should let your script keep running.
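As a minimal sketch of an "Execute shell" build step (the script path is the one from the console output above; the log file name is made up), the variable just has to be set before the long-running script starts so that the Jenkins ProcessTreeKiller spares it and its children:
#!/bin/bash
# Tell the ProcessTreeKiller not to kill processes spawned by this build step.
export BUILD_ID=dontKillMe
# Run the one-hour load test; child processes inherit BUILD_ID and survive too.
/jenkins/workspace/load_test/scripts/loadtest/loadtest1/testRestApi.sh > loadtest.log 2>&1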

PsExec failing when running multiple commands in sequence

Using Windows Task Scheduler I am running multiple commands; I'll call them task1.bat, task2.bat, and task3.bat. Each one of these scripts runs a different PsExec command (PsExec version 2.11).
When running task1.bat, task2.bat, and task3.bat individually, these scripts run successfully; however, when run in succession, task1.bat will run successfully, then task2.bat and task3.bat will usually fail with the error "Couldn't access servername. Access is denied. The syntax of the command is incorrect".
It seems like an error with PsExec, since when run individually the commands work fine. Is there a way to force PsExec to exit/end before moving on to the next script (besides just putting in a timeout)? It seems like PsExec is hung, which is causing the next one to fail.
The .bat scripts will run sequentially if you create a wrapper batch file like the one below and run that instead; CALL waits for each script to finish before the next line executes:
CALL task1.bat
CALL task2.bat
CALL task3.bat

Ruby expect sftp script runs from command line, fails to log in from cron

I have a Ruby program that uses expect to perform a password-login SFTP session.
When I run the program from the command line, it works, but when cron runs it, the remote server never acknowledges that the program has sent the password.
A shell-based expect script written by somebody else does work from cron.
Is this problem related to ptys? Any ideas on how to troubleshoot?

Running a Linux script remotely from Windows and getting the execution result code

I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under Suse Linux) with ARCServe R15 (installed on Windows 2003R2SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post execution scripts to my backup-jobs.
If the script fails, ARCServe is configured NOT to run the backup job; if it succeeds, the job runs. I have no problem with this part.
The problem is that I want to make a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job runs normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking on it, or launching it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script seems to never end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch file never closes.
In my mind, what's happening is that plink just opens the connection to the Linux machine, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I trying something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please, it's giving me headaches ...
If you install Cygwin, you can do this exactly like you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code that the command exited with on the remote end. If you use SSH keys for authentication instead of passwords, it can also be scripted without user interaction.
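A minimal sketch of what the wrapper could look like, assuming Cygwin's ssh is used, the script is launched through Cygwin's bash.exe, and the PuTTY .ppk key has been converted to OpenSSH format (the key path, key name, and cluster_name below are placeholders):
#!/bin/bash
# Run the backup script on the cluster; ssh exits with the remote command's
# status, or 255 if the connection itself fails.
ssh -i /cygdrive/c/IST/admin/scripts/HPC/pri_openssh_key root@cluster_name /appli/admin/backup_admin
# Hand that exit code straight back to the caller (ARCServe).
exit $?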
