error while running shell script through jenkins pipeline - bash

I am getting the error below while trying to run a shell script through a Jenkins pipeline:
+ /home/pqsharma/symlinkBuild.sh 19.07
sh: line 1: 21887 Terminated sleep 3
With this Jenkinsfile:
node('linux') {
    stage('creating symlink') {
        stdout = sh(script: '/home/pqsharma/symlinkBuild.sh 19.07', returnStdout: true)
    }
}

This is tracked as JENKINS-55308: "Intermittent 'Terminated' messages using sh in Pipelines".
Jenkins master runs from a Docker image based on jenkins/jenkins:2.138.2-alpine with specific plugins baked into the image by /usr/local/bin/install-plugins.sh
The message originates in durable-task-plugin, which must be a dependency of one of the plugins.txt plugins.
Check if this is the case for you.
Caused by JENKINS-55867: "sh step termination is never detected if the wrapper process is killed".
When you execute a shell step, Jenkins runs a wrapper shell process that's responsible for saving the exit code of your script. If this process is killed, then Jenkins never discovers that your script has terminated, and the step hangs forever.
This seems to have been introduced after v1.22 of the durable-task-plugin.
Diagnostic:
The sleep 3 is part of the execution of a shell step.
A background process touches a specific file on the agent every 3 seconds, and the Jenkins master checks the timestamp on that file as a proxy to know whether the script is still running or not.
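The heartbeat mechanism described above can be sketched roughly like this (an illustrative simplification, not the plugin's actual code; file names are made up):

```shell
#!/bin/sh
# Illustrative sketch of the durable-task wrapper (NOT the plugin's real
# code): touch a heartbeat file every 3 seconds while the user's script
# runs, then record its exit code where the controller can read it.
run_with_heartbeat() {
  : > jenkins-heartbeat.txt
  while :; do touch jenkins-heartbeat.txt; sleep 3; done &
  hb_pid=$!
  "$@"                                  # the user's actual script
  status=$?
  kill "$hb_pid" 2>/dev/null
  echo "$status" > jenkins-result.txt   # if the wrapper is killed first,
  return "$status"                      # this file is never written
}
```

If the wrapper is killed, the result file is never written, which matches the "step hangs forever" symptom from JENKINS-55867.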
It seems based on the reports here that something is causing that process to be killed on some systems, but I don't have any ideas about what it could be offhand.
Possible cause:
The bug is not just in the durable-task-plugin, although the symptoms come from there. It is introduced when you upgrade workflow-job. I have managed to pinpoint it to a specific version.
Upgrading workflow-job to 2.27 or later triggers the bug. (2.26 does not exist.)
So try downgrading your workflow-job plugin to 2.25.
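Since the master in this setup bakes plugins into the image via install-plugins.sh, the downgrade can be expressed as explicit pins in plugins.txt. The workflow-job version comes from this answer; the durable-task pin is an assumption (the last version before the reported regression), so verify it against your own dependency set:

```
workflow-job:2.25
durable-task:1.22
```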

Related

How to fail Azure devops pipeline task specifically for failures in bash script

I am using an Azure DevOps pipeline, and in it there is one task that creates a KVM guest VM; once the VM is created through Packer inside the host, it runs a bash script to check the status of the services running inside the guest VM.
If any services are not running or throw an error, the bash script exits with code 3, as I have added the following in the script:
set -e
So I want the task to fail if the above bash script fails. The issue is that the KVM guest VM is created in the same task, so during boot-up and shutdown it throws expected errors; I don't want the task to fail due to these errors, but only when the bash script fails.
I have selected the "Fail on Standard Error" option in the task.
But I'm not sure how we can fail the task specifically for bash script errors. Does anyone have suggestions on this?
You can try using the exit 1 command to make the bash task fail; it is often a command you'll issue right after an error is logged.
Additionally, you can use logging commands to customize an error message. Refer to the sample below:
#!/bin/bash
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1
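As a fuller sketch of that pattern: report each stopped service with a logging command and fail only when at least one check failed. The pgrep-by-process-name check is a stand-in for a real service-status check (e.g. systemctl is-active), and the service names are placeholders:

```shell
#!/bin/bash
# Emit an Azure DevOps error per stopped service; return non-zero only
# when at least one check failed. pgrep-by-name is a stand-in here for a
# real check such as `systemctl is-active`; service names are placeholders.
check_services() {
  local failed=0
  for svc in "$@"; do
    if ! pgrep -x "$svc" > /dev/null 2>&1; then
      echo "##vso[task.logissue type=error]Service $svc is not running."
      failed=1
    fi
  done
  return "$failed"
}

# Example usage in the task's script:
# check_services sshd crond || exit 1
```

Because only this script's own exit code fails the task, the expected boot/shutdown errors from the VM no longer matter.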

Jenkins User Not Accessible In Bash and Builds Not Working

I am trying to become the jenkins user with this command: sudo su -s /bin/bash jenkins.
When I run it, this happens in my shell: bash-4.2$ Killed.
I have been trying to debug it by checking the memory usage on the server (we have plenty), reading debug lines of processes being killed, and the Jenkins log, but nothing leads me toward a solution.
How do I start to debug this problem and what could be going on? There aren't any obvious errors being thrown my way.
I've discovered that when I restart the server, I can log in as the jenkins user and the jenkins builds work. However, shortly thereafter, I cannot log in and the builds no longer work.

Jenkins job windows batch execution 20 times slower than executing in cmd.exe

I just installed Jenkins 2.46.2 on a Windows 2012 Server \o/. It runs as a system service.
I created a job that executes a Windows batch (.bat) script to build a code project. The batch runs two mingw32-make.exe commands to clean and then build a full binary from source code.
Executing the batch manually on the machine, in the same filesystem location (the same workspace the Jenkins job uses, on a local disk, not a network disk), the clean-build takes ~50 seconds.
But when executed by Jenkins, the job takes more than 20 times longer (~19 minutes). It terminates successfully with the same behavior as when executed manually in cmd.exe.
I changed the JVM launch arguments in the jenkins.xml file to "-Xmx1024m -XX:MaxPermSize=512m", as I read in the documentation, to improve performance. But it does not fix anything :-(
Also, when I monitor the CPU/disk/RAM usage, they all stay very low while building, so I deduce that the raw performance of the machine is not the cause.
Whether or not I invoke the batch with a call statement in the Jenkins job build step does not change anything: the job always lasts 19 minutes.
Can anybody help me investigate why it is so slow?
Thanks in advance :)
I had a similar problem. I noticed that .bat files with echo Hello World ran fast and with no problem.
But once I tried to launch grep.exe from a batch script, it took 24 seconds (in my case) to run, even with no input files. If launched manually, it finishes in no time.
I used grep.exe version 2.5.4 from MSys 1.0 distribution.
The solution in my case was rather unexpected - I updated grep to version 2.24, and now, being launched from Jenkins, it takes less than one second to process over 1 MB log file.
After a couple of days of investigation, I finally found the cause.
In my case, the reason is the Jenkins agent.
When I install the Jenkins agent as a Windows service on the slave, the run time is huge, but when I start the Jenkins agent via the Windows command line, the run time is as normal as executing the batch file manually.
My env:
master: CentOS7
slave agent: win 7
I also tested this case on a Windows 10 slave agent for comparison.
There, the time executing via Jenkins is approximately the same as executing the batch file manually on the agent machine.
So I guess this is a compatibility issue between Windows 7 and Jenkins.
But since Jenkins officially no longer supports Windows 7 (Microsoft does not support Windows 7), we temporarily put it aside.
Anyway, we found a way to work around this. Hope this helps in a similar scenario.
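For reference, starting the agent from the Windows command line (instead of as a service) looks roughly like this; the controller URL, agent name, and secret are placeholders for your own values:

```
java -jar agent.jar -jnlpUrl http://your-jenkins-host/computer/your-agent-name/slave-agent.jnlp -secret <your-secret> -workDir "C:\jenkins"
```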

Jenkins Workflow sh Step Hanging

I currently have a problem with a shell step in a workflow script hanging. The step appears to complete, but the workflow doesn't move on, and the Jenkins java process begins to consume large amounts of CPU.
Jenkins is running on an OS X box and the sh step is a call to xbuild to build a Xamarin app.
def shell = "xbuild /p:Configuration=${buildConfig} /t:Build ${_solution.getPath()}"
sh("${shell} >> ${_logFile.getPath()}")
The contents of the log file suggest that xbuild completed successfully, but the workflow never moves on from the sh step.
Could anyone suggest a strategy to find out what is causing it to hang?
This turned out to be caused by a tight loop I had that was executing after the sh step completed.
My recommendation to anyone else experiencing a problem like this would be to make good use of logging to the console output so you can see exactly where the build is stuck.

start daemon on remote server via Jenkins SSH shell script exits mysteriously

I have a build job on Jenkins that builds my project; after it is done, it opens an SSH session on a remote server, transfers files, and then stops and starts a daemon.
When I stop and start the daemon from the command line on a RHEL server, it executes just fine. When the job executes in jenkins, there are no errors.
The daemon stops fine and it starts fine. But shortly after starting, the daemon dies suddenly.
sudo service daemonName stop
# transfer files.
sudo service daemonName start
I'm sure that the problem isn't pathing.
Does anyone know what could be special about the way Jenkins is executing the ssh shell script that would cause the daemon start to not fully complete?
The problem:
When executing a build through jenkins, the command to start the daemon process was clearly successfully executing, yet after the build job was done, the daemon would suddenly quit.
The solution:
I thought for this whole time that it was jenkins killing the daemon. So I tried many different incarnations and permutations of disabling the ProcessTree module that goes through and cleans up zombie child processes. I tried fooling it by resetting the BUILD_ID environment variable. Nothing worked.
Thanks to this thread, I found out that that solution only works for child processes executed on the BUILD machine, i.e. it is not applicable to my problem.
More searching led me here: Run a persistent process via ssh
The solution? Nohup.
So now the build successfully restarts the daemon by executing the following:
sudo nohup service daemonname start
Jenkins watches for processes spawned by the job and kills them to avoid zombie processes.
See https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
The workaround is to override the BUILD_ID environment variable:
BUILD_ID=dontKillMe
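A sketch combining both workarounds from this thread in the build step's shell script (daemonname is the service from the question; on newer Pipeline versions the variable to override may be JENKINS_NODE_COOKIE instead):

```
# Prevent the ProcessTreeKiller from reaping the daemon, and detach it
# from the SSH session so it survives the connection closing.
export BUILD_ID=dontKillMe
sudo nohup service daemonname start
```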
