How to get node name from previous build in Jenkins pipeline - jenkins-pipeline

I know you can get NODE_NAME from the current build using
env.NODE_NAME
But when I tried to figure out which machine the previous build ran on using
Jenkins.instance.getItemByFullName('jobname').lastSuccessfulBuild.getEnvironment()
The result doesn't contain the NODE_NAME which I need.
Is there another way to find out the NODE_NAME?
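In case it helps: for a freestyle job the previous build is an AbstractBuild, which does record the node it ran on, so something like the sketch below (run from the script console or a system Groovy step, outside the pipeline sandbox) can work. Plain pipeline runs (WorkflowRun) don't expose a single node, so there the usual workaround is to persist env.NODE_NAME yourself during the build and read it back later. 'jobname' is a placeholder.
// Sketch for the Jenkins script console or a system Groovy step.
def job = Jenkins.instance.getItemByFullName('jobname')     // placeholder job name
def build = job?.lastSuccessfulBuild
if (build instanceof hudson.model.AbstractBuild) {
    def node = build.builtOn                                 // null if the node was deleted since
    println(node ? node.nodeName : 'node no longer exists')  // '' means the built-in (master) node
} else {
    // Pipeline runs do not record a single "built on" node; persist it yourself
    // during the build, e.g. currentBuild.description = "node=${env.NODE_NAME}",
    // and read build.description back here.
    println build?.description
}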

Related

How to run custom command on Rundeck host in Rundeck?

I'm creating Rundeck job and I want to run a custom command (e.g. git pull) on Rundeck host itself within a job working on other nodes. I can see the Command or Script node steps, but is there a matching workflow step?
Context
I'm pretty new to Rundeck, so here's some context on what I'm trying to achieve; maybe my whole design is wrong. I'm pretty familiar with Ansible, though, and I'm trying to integrate it with Rundeck so that Rundeck acts as an executor of Ansible scripts.
We're developing a software product which is an on-prem solution and is quite complex to install (it requires deep OS configuration). We want to develop it in a Continuous Delivery fashion, as our cloud products are. So in the git repository, alongside the product, we keep the Ansible workspace (playbooks, roles, requirements, custom tasks - everything except the inventory), and on every commit the Ansible workspace should be compatible with the particular product version.
My current approach is the following: the build pipeline publishes both the build of the product and the zipped Ansible workspace as artifacts. Whenever we want to deploy it, we would run a Rundeck job, which:
downloads the Ansible workspace from the artifacts (alternative idea: pulls the repository at the proper commit)
runs the Ansible playbook (via the Ansible workflow step), which does the work on the selected nodes
How can I perform this first step? From what I can see, I can run a script or a command on nodes (but in a particular job run the nodes are the target machines, not the Rundeck host). There is also the SCM git plugin for Rundeck, but it loads job definitions from a repository, not an Ansible workspace.
A good approach for that:
Integrate Rundeck with Ansible following this (note that Rundeck and Ansible must coexist on the same server).
Create a new job; by default, new Rundeck jobs are configured to run locally.
In the first step you can add a "script step" (inline script) that moves to the desired directory and runs the git pull command (you can also use this approach to clone the repo if you need that); see the sketch after this list.
The next step can then be the execution of your Ansible playbook (Ansible Playbook Workflow Node Step) in your job.
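A minimal sketch of what that inline script step might contain (the workspace path, the repository URL, and the revision handling are placeholders for however you store and select the Ansible workspace):
#!/bin/bash
# Inline script step on the Rundeck host (job configured to run locally).
# Clones the Ansible workspace on first run, then checks out the requested revision.
WORKSPACE_DIR=/var/lib/rundeck/ansible-workspace          # placeholder path
REPO_URL=git@example.com:product/ansible-workspace.git    # placeholder repository
REVISION=${1:-master}                                     # placeholder; could come from a Rundeck job option instead

if [ ! -d "$WORKSPACE_DIR/.git" ]; then
    git clone "$REPO_URL" "$WORKSPACE_DIR"
fi
cd "$WORKSPACE_DIR"
git fetch --all
git checkout "$REVISION"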

Committing a Docker container after a build fails

I'm trying to use the Docker plugin in Jenkins to commit a docker container when the build running on it fails. Currently I have a Jenkins server with ~15 nodes, each with its own docker cloud. The nodes all have the latest version of docker-ce installed. I have a build set up to run on a docker container. What I want to do is commit the container when the build fails. Below are the things I have tried:
Adding a post-build task, where I obtain the container ID and the hostname of the node running the container. I then SSH into the node and commit the container.
The problem: I'm not able to SSH from inside the container, as it requires a password, and there's no way of adding the node to the container's list of known hosts.
Checking the "commit container" box in the build's general configurations
The problem: This is probably working but I don't know where the container is being committed to. Also this happens every time, and not just when the build fails.
Using the build script
Same problem as using the post build task
Execute a docker command (Build step)
This option asks for the container ID, which I have no way of knowing as it is new every time a build is run.
Please let me know if I have misunderstood any of the above ways! I am still new to Jenkins and Docker so I am learning as I go. :)
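For what it's worth, one way around the unknown container ID is to let the job start the container itself. This is only a sketch and swaps the Docker plugin's cloud agents for the Docker Pipeline (docker-workflow) plugin, so it assumes the build can be expressed as a pipeline; the agent label, image name, and build command are placeholders:
// Scripted pipeline sketch: the container ID is known because the job starts
// the container, so it can be committed only when the build fails.
node('docker') {                                                  // placeholder agent label
    def container = docker.image('my-build-image').run('-t')     // placeholder image; assumes its default command keeps it running
    try {
        sh "docker exec ${container.id} ./run-build.sh"           // placeholder build command
    } catch (err) {
        // The build failed: snapshot the container for later inspection.
        sh "docker commit ${container.id} failed-build:${env.BUILD_NUMBER}"
        throw err
    } finally {
        container.stop()                                          // the committed image is not affected
    }
}
Because the pipeline holds the container object, the commit happens only on the failure path and the image is tagged with the build number, which answers the "where is it committed to" part as well.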

sh file is not running on slave node in Jenkins?

Hi, I am fairly new to Jenkins configuration and I am stuck on running an sh file on a slave node. I have created two jobs: one creates some .sh and .jar files and the other copies them to all the slave nodes. After the build I need to run the .sh file, which runs locally but not on the master. I am specifying the path, but Jenkins always runs some blank .sh file from the tmp folder.
whereas in the job config I have given this path.
The slave.sh file is present on the remote slave, but Jenkins is not running it. What is the possible cause?
I really do not understand what you are trying to do. You have a very strange way of dividing the work and then executing part of it in a post-build step. There should be very few use cases for a post-build step. Maybe you could just try to execute the slave.sh script in a normal build step? And maybe execute it directly from the source location without copying it to another location.
If I'm missing something and it really is necessary to execute slave.sh in a post-build step, please verify the path to the script is correct. There are several similar but slightly different paths in your question and I cannot say if that is on purpose but probably not.
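For example, a plain "Execute shell" build step on the job that is tied to the slave node could be enough; the path below is a placeholder for wherever the first job actually copies the script:
# "Execute shell" build step, executed on the slave node the job is tied to.
# Adjust the path to the real location of the copied script on that node.
SCRIPT=/home/jenkins/scripts/slave.sh    # placeholder path
ls -l "$SCRIPT"                          # fails the step with a clear message if the copy step missed it
chmod +x "$SCRIPT"
"$SCRIPT"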

Share Timestamp within Jenkins

I have a Jenkins job where I execute a command from within the Maven plugin, which executes an Ant build script. The job also makes 2 Ant calls, as there are 2 mirror servers. Something like this:
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties
Where Maven connects to each server and executes something similar.
My question is: how can I share the timestamp of when the Jenkins job starts between the 2 Ant calls? In my Ant job I have a backup build step before rolling in new code, but I need logic so that if the dump/backup was done on the first host, it is not done on the second one, since they share a MySQL instance and core files on an NFS mount. What happens right now is that there is no such logic, and when the second Ant call runs the dump on the second server, it overwrites the previous dump from the first instance with the new data and updated MySQL.
So I was thinking of creating a touch task to touch some file, since I have a shared directory between the 2 servers, but I have the same build.xml for both server instances, so the touch would also be executed on the second Ant call and overwrite the modification time from the first Ant call.
I thought about whether I could share the Jenkins timestamp property of when the job starts between the 2 Ant calls. I do not know if this is possible.
Thanks in advance for any advice.
I suggest you use the environment variable BUILD_NUMBER set by Jenkins and store it on your NFS, as a property file for instance.
So if that property file doesn't exist, or if the value of the property loaded from there doesn't match the environment variable set by Jenkins, it means the current node is the first to run for that Jenkins build, so it can do the backup. That node then overwrites the shared property file with the current build number.
If the loaded property matches the current build number, then the first run has already been done and there is no backup to do.
Implementation hints:
add the build number to the command line
/usr/bin/ant -v -d -f /utils_repo/build.xml ${target} -propertyfile /tmp/myjob/install.properties -Dexpected.jenkins.build.number=$BUILD_NUMBER
use the ant-contrib if/then/else tasks (http://ant-contrib.sourceforge.net/tasks/tasks/index.html); a combined sketch follows this list
write the property file with:
<echo file="/mnt/nfs/shared/jenkins.properties">jenkins.build.number=${expected.jenkins.build.number}</echo>
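Putting those hints together, a build.xml fragment could look roughly like the sketch below. It is not a drop-in: the ant-contrib taskdef, the shared path /mnt/nfs/shared, and the "backup" target name are assumptions about your setup.
<!-- Sketch only: ties the hints above together inside build.xml. -->
<taskdef resource="net/sf/antcontrib/antcontrib.properties"/>

<target name="maybe-backup">
    <!-- Load the build number recorded by whichever server ran first (if any). -->
    <property file="/mnt/nfs/shared/jenkins.properties"/>
    <!-- Default for the first ever run; Ant properties are immutable, so this
         only takes effect if the file above did not define it. -->
    <property name="jenkins.build.number" value="none"/>
    <if>
        <equals arg1="${jenkins.build.number}" arg2="${expected.jenkins.build.number}"/>
        <then>
            <echo message="Backup already done for build ${expected.jenkins.build.number}, skipping."/>
        </then>
        <else>
            <antcall target="backup"/>  <!-- placeholder: your existing backup target -->
            <echo file="/mnt/nfs/shared/jenkins.properties">jenkins.build.number=${expected.jenkins.build.number}</echo>
        </else>
    </if>
</target>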

Is it possible to get the raw build log from a TeamCity build?

Is it possible to get the raw build log from a TeamCity build? I've written a custom test runner that gets run as a commandline build step and reports test results back by printing ##teamcity... lines to stdout. The build log from TeamCity seems to be stripping these out when it recognises them. I'd like to see the raw output to help debug my test runner.
Update:
Apparently this simply isn't possible. neverov (I assume Dimitry Neverov of JetBrains?) has explained this and given a workaround so I've accepted his answer.
You can see the raw output from the build agent by looking in the agent's logs directory. This shows the unparsed data that is hidden in the build output shown in the TeamCity console.
For example c:\TeamCity-Agent\logs\teamcity-build.log.
You can download it by clicking "Download full build log" on the build log page.
I couldn't quite tell if this is what you were talking about when you refer to ##teamcity... lines in your question, but this is what I'm currently doing for command-line build steps (which is currently all I do):
echo "##teamcity[testStarted name='dummyTestName' captureStandardOutput='true']"
echo "Do your command-line build steps here."
echo "##teamcity[testFinished name='dummyTestName']"
It's sort of a hacky workaround, but it will result in stdout/stderr being displayed on the build log page in the TeamCity web UI.
I see that this question was asked a long time ago (almost 10 years ago), but nothing has changed in TeamCity.
I faced a similar issue with a test reporter and found a way to get the raw log without connecting to the build agent and fetching it from there (which may be difficult). My solution does not cover the whole build log, but it can be helpful when a step is run via a custom script in Build Steps.
So the solution is to add | tee e2e_raw.log to the required build step script. For example, we run tests in Docker with a docker-compose command:
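(The exact docker-compose invocation from that setup is not shown here, so the service name below is a placeholder; the only part that matters is the | tee suffix, which works the same with any command.)
docker-compose run --rm e2e-tests | tee e2e_raw.log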
tee will duplicate all the output into the file. The original output stays the same and is parsed by TeamCity as usual.
You should also add a line to the artifact paths field so the build collects the newly created file (Build General Settings):
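For example, a single rule in the Artifact paths field such as the one below will publish the file as a zip archive (the archive name is just an example):
e2e_raw.log => e2e_raw.zip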
After that you will see a new archive in the Artifacts tab with the raw log for this build step.
Great answers here before me. I would add that your TeamCity master holds log files for the builds, and you can get them on the command line.
Have a look in <TeamCity Data Directory>/system/artifacts/<project ID>/<build configuration name>/<internal_build_id>/.teamcity/logs.
This mattered to me because
The logs on the TeamCity agents were getting removed after a day or so, but the logs on the master were still available.
I wanted to grep them on the machine itself without having to download multiple, sizeable log files or use my web browser to make multiple page views.
There's an option on the build log to see 'detailed / verbose' - it shows all the service messages. I've seen it since TC9.
