How does the Jenkins config file provider plugin work? - maven

How does the config file provider plugin work?
I have a Jenkinsfile for a declarative pipeline (multi-branch build) that contains:
configFileProvider([configFile(fileId: 'maven-settings', variable: 'MAVEN_SETTINGS')]) {
sh 'mvn -B -s $MAVEN_SETTINGS -DWHERE="$WHERE" deploy'
}
I have tried running this on two different Jenkins installations: one installed directly and running as a daemon, and another running as a container (jenkinsci/blueocean).
When run on the directly installed Jenkins, the Config File Provider plugin is able to provide the required settings:
provisioning config files...
copy managed file [Maven settings] to file:/var/lib/jenkins/workspace/redacted@tmp/config8989354118161621860tmp
When run under the jenkinsci/blueocean container it fails with:
provisioning config files...
not able to provide the file [ManagedFile: id=maven-settings, targetLocation=null, variable=MAVEN_SETTINGS], can't be resolved by any provider - maybe it got deleted by an administrator
I have created a managed maven-settings.xml file with id maven-settings for the blueocean instance, but it is not being picked up.
I've also tried copying it to ~/.m2/settings.xml
By contrast, the working installation does not have any managed files (settings.xml or otherwise), and I am unable to locate any Maven settings file in the workspace. I'm not sure what the @tmp directory is; it is deleted by the time a build finishes.
So my questions are:
Where should I put the settings to make the configFileProvider pass them on for the jenkinsci/blueocean build job?
How does the config file provider plugin work?
I have no idea what it's doing, so it's hard to debug. The source is here, but Java, Maven and Jenkins are not my main area.
What differences are there when Jenkins itself run as a container?
This answer suggests that the config file provider is unnecessary.
There is a similar question which is unanswered, but it relates to a Maven plugin.
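One way to go without the plugin, for comparison, is to keep a settings.xml in the repository and point Maven at it directly. A minimal sketch, assuming the file is checked in at .mvn/settings.xml (that path is my assumption, not something from the linked answer):

// inside the same steps block, with no configFileProvider wrapper
sh 'mvn -B -s .mvn/settings.xml -DWHERE="$WHERE" deploy'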

I have part of the answer: my maven-settings file wasn't being picked up because I was using the file name rather than the file ID, which is different.
The remaining part of the question is how the original Jenkins instance is able to generate this file without it being listed as a managed file.
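For anyone hitting the same error, here is a minimal working sketch, assuming the managed file's ID really is maven-settings (the ID is shown next to the file under Manage Jenkins > Managed files; the stage name and Maven goal here are illustrative):

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // 'maven-settings' is the managed file's ID, not its display name.
                configFileProvider([configFile(fileId: 'maven-settings', variable: 'MAVEN_SETTINGS')]) {
                    // The plugin copies the managed file to a temporary path in the
                    // workspace and exposes that path through $MAVEN_SETTINGS.
                    sh 'mvn -B -s "$MAVEN_SETTINGS" deploy'
                }
            }
        }
    }
}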

Related

Configuring remote Gradle Build Cache on build server

Is it possible to configure a Gradle remote cache from the command line or on the daemon?
Ideally I'd like to configure our build server to use a remote cache without requiring all users of my build server to update all of their settings.gradle files.
I could also potentially inject the required lines into the settings file if it doesn't exist. I can't find any documentation, so that leads me to believe this could be a bad idea.
Likely you'd do this with an init script, specified on the gradle command line with --init-script
See https://docs.gradle.org/current/userguide/init_scripts.html
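For example, a minimal init script along those lines might look like the sketch below (the cache URL is a placeholder and the file name remote-cache.init.gradle is just an example). Dropping it into ~/.gradle/init.d/ on the build server applies it to every build there without touching any project's settings.gradle:

// remote-cache.init.gradle - registers a remote HTTP build cache once each
// build's settings have been evaluated, so projects need no changes.
settingsEvaluated { settings ->
    settings.buildCache {
        remote(org.gradle.caching.http.HttpBuildCache) {
            // Placeholder URL - point this at your actual cache node.
            url = 'https://build-cache.example.com/cache/'
            // Let the CI server populate the cache; push defaults to false.
            push = true
        }
    }
}

The same file can also be passed explicitly for a single run with gradle --init-script remote-cache.init.gradle build while testing.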

Jenkins execute a job from work space instead of SVN path

I want to trigger a Jenkins job for a Maven (v3.5.3) project from my local workspace folder instead of configuring an SVN repository URL in the Source Code Management section. Is there a way to achieve this? I need to test code modifications in the project without committing the changes; that is the purpose.
I am using Jenkins (v2.161) and it is installed in another machine.
Thanks in Advance.
Although it might look like a sort of tinkering, the source code can be pulled to the Jenkins host from your local machine, provided that the two machines are properly configured to communicate via SSH.
In the project build configuration on Jenkins' host:
Do not use "Source Code Management" (choose "None").
Check "Delete workspace before build starts", to avoid conflicts with previous changes.
As the very first build step, add "Execute shell" and write a few commands that pull the data, for example:
scp -r myusername@myhost:/path/to/myworkspace/myproject/src .
scp myusername@myhost:/path/to/myworkspace/myproject/pom.xml .
# etc. for all the files/dirs you need to build the project
Then continue with the build steps that were already used for building the project from SCM.
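If the job were a pipeline rather than a freestyle job, the same approach could be sketched roughly as follows (the host, paths and build command are placeholders, and SSH access from the Jenkins host to your machine is assumed):

pipeline {
    agent any
    options {
        // Skip the automatic SCM checkout (the "None" choice above).
        skipDefaultCheckout()
    }
    stages {
        stage('Pull sources') {
            steps {
                // Same idea as "Delete workspace before build starts".
                deleteDir()
                // Copy the working tree from the developer machine over SSH.
                sh 'scp -r myusername@myhost:/path/to/myworkspace/myproject/src .'
                sh 'scp myusername@myhost:/path/to/myworkspace/myproject/pom.xml .'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B package'
            }
        }
    }
}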

Setting an Endpoint in Jenkins

I currently have a SoapUI project which I intend to have executed periodically (every 5 minutes) in Jenkins. I've completed the following thus far:
Created the relevant directory in the workspace, i.e. workspace\SOA\SOAProject\src\test\soapui\SoapUIProject.xml
I've configured a pom.xml which sits in the SOAProject folder alongside the src folder
I've created a Jenkins job (I've chosen a Maven project, although it should not be an issue if I had chosen a freestyle job)
My question is, how do I set the endpoint?
I've done the following...
Build
Root POM: pom.xml
Goals and options: testrunner.bat -e0.00.0.006:8040
Edit:
I've installed the EnvInject plugin. I'm not sure how to create the properties file or what to put in there in order to set the execution environment.
I don't know the answer, but my suggestion is to get it running via the command line first. Once you figure out how to launch it without Jenkins, having Jenkins issue the same command becomes easy.
If you choose a Maven project, there is a useful plugin to set the endpoint and different properties for the test suites:
https://github.com/redfish4ktc/maven-soapui-extension-plugin

ld: library not found for -lLIBRARY_NAME, only when trying to build using jenkins

I have a project that when built from my machine everything runs smoothly. However, when I push my changes and try to generate the build using jenkins, I'm getting the library not found error message.
This is weird because if I log into the Jenkins server and do a manual build, everything works as expected. It's only when I build from within Jenkins that the build fails.
I'm using LIBRARY_NAME as a stand-in for the name of whatever library is involved; in my case it is an internal static library.
Any suggestions?
EDIT:
I've set up: Library Search Paths, Other Linker Flags, Target Dependencies, and the proper architectures.
I wrestled with this for three weeks until I accidentally stumbled onto the solution with the help of a coworker. Basically, you must make sure that the path structure created in your repository is replicated in the .jenkins workspace. To do this, make sure that in your project/job configuration in Jenkins, the Source Code Management > Subversion > Repository URL field points at the same level as the paths used in your script or in the Build > Execute Shell > Command field. So here are my settings as an example:
Jenkins URL = https://myDuncwa.local/duncwa-repo
Jenkins Build = "cd $WORKSPACE/mobileapps/projects/PictureBoard/trunk/PictureBoard" and, on line 2, "xcodebuild -project PictureBoard.xcodeproj" (no quotes).
Subversion Repository URL = https://myDuncwa.local/duncwa-repo
This will cause the directory structure mobileapps/projects/PictureBoard in my repository to be replicated in the .jenkins/jobs/PictureBoard/workspace/mobileapps/projects/PictureBoard directory that is created automatically by Jenkins. Note: 1) this will copy the entire repository, so be prepared to change this later, and 2) .jenkins is a hidden directory created by the install in the installing user's home directory ("~/").

How can I copy the artifacts from Teamcity to another server?

How can I copy the artifacts from TeamCity to another server?
Thanks
The way I have done this makes things a lot easier: set up another configuration that pulls in, via artifact dependencies, all the files you need, then run a cmd script to xcopy/copy the files to another drive on the network. You can do this using a cmd script, VBS, Python, shell, etc.
Remember, you only need to refer to directories as if they were local, as your script runs in the same working directory, i.e. a cmd script like:
xcopy .\"my build artifact(s)" \path\to\drive\on\my\network\"my build artifacts"
It doesn't get easier than that.
Naturally, if your artifacts are huge, then you may want to consider a more involved option. However, TeamCity currently has a ticket pending, which you can vote on, that would allow you to run multiple runners in one configuration - so you could just add your cmd script to the same configuration to save the copy time; please vote if you can spare a minute:
http://youtrack.jetbrains.net/issue/TW-3660
There is a Deployer plugin that supports deployment via fileshare/SMB, FTP, SSH and other means. The usage is basically the same as the artifact paths.
We have used just Samba, so you enter:
Target host path: //server/drive/myfolder
Username: mydomain\myusername - in our case we had to write the domain here too
Password: ****
Domain: mydomain
and in the paths field just select the files as in artifacts:
product/* => product.zip
and it will create the file //server/drive/myfolder/product.zip
You can do it from your build script or externally.
If you are looking to get artifacts copied from a remote build agent to the primary TeamCity server, you may want to look into configuring Build Artifacts under the General Settings.
According to TeamCity's wiki entry on BuildArtifacts (http://confluence.jetbrains.com/display/TCD7/Build+Artifact) "Upon build finish, TeamCity searches for artifacts in the build's checkout directory according to the specified artifact patterns. Matching files are then uploaded ("published") to the TeamCity server, where they become available for download through the web UI or can be used in other builds using artifact dependencies."
