Cloud Build cross-config step execution - google-cloud-build

I am busy optimising some of our Cloud Build configuration files and have identified several common steps that are present in multiple configuration files. For example, Config 1 contains the following:
Action A
Action B
and Config 2 contains the following:
Action A
Action C
For a real-world example, Action A could be the step that pulls a sub-module repository from Source Repositories or a docker-compose build step.
Is it possible to execute the steps in a central configuration file from other configuration files to reduce duplicate steps?
For example, Action A is a shared step that is moved to a central configuration file and called from the original configuration files:
Shared.yaml:
  Action A
Config 1:
  <Execute Shared.yaml>
  Action B
Config 2:
  <Execute Shared.yaml>
  Action C
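
For concreteness, a minimal sketch of what Config 1 looks like today, with Action A duplicated inline; the builder images and arguments below are illustrative, not taken from our actual configs:

steps:
  # Action A (shared): pull sub-module sources
  - name: 'gcr.io/cloud-builders/git'
    args: ['submodule', 'update', '--init', '--recursive']
  # Action B (specific to Config 1): docker-compose build
  - name: 'docker/compose:1.29.2'
    args: ['build']

Ideally the <Execute Shared.yaml> placeholder would replace the Action A block in each config.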

Related

TeamCity Artifact Dependencies: target directory same as source directory

I have 4 TeamCity build configurations like this:
build, test, and static analysis run in parallel; sonar waits for the 3 steps to finish and uses the test results and analysis results to create the Sonar report.
build ------------|
test -------------|
static analysis --|
                  |----- Sonar report
I want to use the test step results (.exec files for coverage) in the sonar step. As I understand it, I need to use artifact dependencies for this, BUT the test step creates many files in the build directories, like this:
/{moduleName}/build/jacoco/*.exec
I have >100 modules, so how can I set up artifact dependencies that transfer the files to the next step without changing their directories, like this:
(source dirs)               (target dirs)
**/build/jacoco/*/*.exec => **/build/jacoco/*/*.exec
I can't write all these paths out manually.
UPD: QUESTION RESOLVED
If you use a pattern in the artifact publish rules like +:**/build/jacoco/*/*.exec (without a target), all artifacts keep their position in the directory tree.
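
A minimal sketch of both sides, assuming the configurations are named test and sonar (names and UI field labels are illustrative):

Artifact paths in the test configuration (publish rule, no "=> target" part):
+:**/build/jacoco/*/*.exec

Artifact rules on the sonar configuration's artifact dependency (again without a target, so the directory tree is preserved):
+:**/build/jacoco/*/*.exec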

Deploying Oracle Service Bus With Maven: Deploys Fine From One Directory But Fails From Another

I'm attempting to create an automated build and deployment for an OSB (Oracle Service Bus) composite. Such a system consists of two commands (run via command prompt from the directory in which the POM resides) after setting up Maven and the OSB plugin on the build server:
mvn package
mvn deploy -DoracleServerUrl=http://serverurl:port -DoracleUsername=username -DoraclePassword=password
This fails in the build system with the following exception:
The session cannot be activated due to the existence of conflicts.
But I believe, at its core, this is because the build system creates the package with the first command during the build phase, and then deploys with the second command during the release phase.
If I take the code directly and run the two commands from Directory 1:
D:\OSBComposites\HelloWorldOSBService\HelloWorldOSBService
the commands run and the composite deploys fine.
If I literally copy the same code from Directory 1 to Directory 2 and run the same commands from Directory 2:
D:\OSBComposites\HelloWorldOSBService\HelloWorldOSBService2
the second command fails with the same exception cited above.
This isn't a one-off situation either - I can recreate it dozens of times consistently. Running the commands from Directory 1 always succeeds while running the commands from Directory 2 always fails with the exception noted above.
And yes, this is a simple default HelloWorld composite - as simple as can be with no references to absolute paths.
Is there a cache in Maven or OSB that's "remembering" the original path from which the composite was first deployed or some other mechanism that prevents a composite from being deployed from a different location?
If your pom.xml resides in /path/directory1/pom.xml, your OSB project would get deployed as directory1 - redeploying as directory2 could then cause the conflicts that you observe.
If you need to deploy it from a different location, you could place it in /path2/directory1/pom.xml.
For your example, this should work:
Copy your project's content to a path similar to the one below and then run the Maven deployment:
D:\OSBComposites\HelloWorldOSBService2\HelloWorldOSBService
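
If that premise holds, the mapping for the paths above would be (deployment names inferred from the directory containing the POM, not verified against the server):

D:\OSBComposites\HelloWorldOSBService\HelloWorldOSBService    -> deploys as HelloWorldOSBService
D:\OSBComposites\HelloWorldOSBService\HelloWorldOSBService2   -> deploys as HelloWorldOSBService2 (the observed conflict)
D:\OSBComposites\HelloWorldOSBService2\HelloWorldOSBService   -> deploys as HelloWorldOSBService (same name, clean redeploy)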

TeamCity Build Chain configuration

I have a TeamCity project which includes 4 configurations and the build chain needs to look something like this:
Build, which can be triggered manually and executes .bat scripts that compile a bunch of artifacts for the Deploy and TEST configs to pick up.
Deploy and TEST – Region 1 has an artifact dependency on the Build config.
Deploy and TEST – Region 2 has an artifact dependency on the Build config.
Since I wanted both Region 1 and Region 2 to run in parallel as soon as Build is successful, I added a snapshot dependency on the Build config to both Deploy and TEST – Region 1 and Deploy and TEST – Region 2.
Now I need to configure the Test Status config just to report the failure/success of the previous configs (the Deploy and TEST configs).
How can this be achieved? Also, do I need to tweak my set up anywhere for the use case I am trying to achieve?
The setup looks correct. To get the build chain status in the Test Status configuration, you need to add snapshot dependencies on the Deploy and TEST – Region 1 and Deploy and TEST – Region 2 configurations. If any build from the chain fails, the Test Status build will also fail with the status: "Snapshot dependencies failed: ... <build configuration names>"
If you add these snapshot dependencies and run Test Status via the UI, the whole build chain will be added to the queue. You can also configure a VCS trigger in the Test Status build configuration with the option "Trigger on changes in snapshot dependencies". With this option enabled, the whole build chain will be triggered even if changes are detected in the dependencies rather than in the resulting build.
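Put together, the chain looks like this:

Build --+-- Deploy and TEST – Region 1 --+
        |                                |
        +-- Deploy and TEST – Region 2 --+----- Test Status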
This article can be helpful.

Jenkins CI: Where and how to store configuration files?

I am in the process of moving configuration parameters out of a Java application. I have found that the best approach is to extend the classpath and use .properties files (leaving ZooKeeper aside for another requirement).
So my WAR file no longer has any hosts/IPs/URLs or users/passwords.
DevOps distribute the configs manually across the test, stage, and stable installations.
Now it is time for Jenkins to run the tests. But they fail, as the required .properties files are not on the classpath.
How can I load these config files into Jenkins, and how can I make them available on the test classpath?
maven-surefire-plugin allows extending the classpath and passing system properties.
So the only question is how to get a separate directory on the Jenkins host, load files into this directory, and create an alias/placeholder/env var per build job to refer to this path in the build config.
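
For the surefire part, a minimal sketch, assuming the external config directory is passed in as a config.dir property (the property name is made up for illustration):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- put the external config directory on the test classpath -->
    <additionalClasspathElements>
      <additionalClasspathElement>${config.dir}</additionalClasspathElement>
    </additionalClasspathElements>
    <!-- and/or expose its location to tests as a system property -->
    <systemPropertyVariables>
      <config.dir>${config.dir}</config.dir>
    </systemPropertyVariables>
  </configuration>
</plugin>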
This job can be done with SSH access, but I think that is the "wrong way". I expect that this can be done via the Jenkins UI (so any manager can upload a file in a web browser).
UPDATE: I have no requirement for distributed slave/master builds, but it would be nice to have a solution that migrates configuration files to the slaves automatically...
Seen that way, sshing to the host or using ftp/scp is a bad option.
I read most of the Jenkins docs and asked on the mailing list and IRC. Yeah - the Jenkins community is silent. In the docs I found a link to the Config File Provider Plugin; after that I visited the http://builder.evil.com/jenkins/pluginManager/available page and looked for the config keyword.
There are a lot of related plug-ins of varying usefulness for my purpose (most useless first):
https://wiki.jenkins-ci.org/display/JENKINS/Envfile+Plugin - This plugin enables you to set environment variables via a file.
https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Binding+Plugin - Allows credentials to be bound to environment variables for use from miscellaneous build steps.
https://wiki.jenkins-ci.org/display/JENKINS/Environment+Script+Plugin - Allows you to run a script before each build that generates environment variables for it.
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin - This plugin makes it possible to have an isolated environment for your jobs.
https://wiki.jenkins-ci.org/display/JENKINS/Copy+Data+To+Workspace+Plugin - Copies data to workspace directory for each project build.
https://wiki.jenkins-ci.org/display/JENKINS/Copy+To+Slave+Plugin - This plugin allows to copy a set of files, from a location somewhere on the master node, to jobs' workspaces. It also allows to copy files back from the workspaces of jobs located on a slave node to their workspaces on the master one.
https://wiki.jenkins-ci.org/display/JENKINS/Config+File+Provider+Plugin - Adds the ability to provide configuration files (i.e., settings.xml for maven, XML, groovy, custom files, etc.) loaded through the Jenkins UI which will be copied to the job's workspace.
Only the last plug-in - the Config File Provider Plugin - allows editing configs via the Jenkins web interface. And it has a brother - the Managed Script Plugin - for uploading/managing/editing custom scripts. No question now: I use the Config File Provider Plugin!
You should keep the configs required for the tests together with the rest of the source code, so that after compilation your unit tests can run.
After deploying the .war, the DevOps team should overwrite the in-war configs with whatever per-environment configs they have.
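
On the test side this boils down to the standard Maven layout, where anything under src/test/resources lands on the test classpath automatically (the file name app.properties is hypothetical):

src/main/java/...                   application code
src/main/resources/...              defaults packaged into the .war
src/test/resources/app.properties   test-only config, visible to unit tests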

Nuget - Pack and Publish in the same build configuration in TeamCity

I have a TeamCity build configuration where step 2 is packing a NuGet package and step 3 is publishing the NuGet package to an external build server.
At least that was my intention.
When I omit step 3, the build succeeds and the package is created and placed in the directory I specified in step 2.
When I include step 3, it fails because TeamCity has not yet placed the created NuGet package at the specified destination.
Am I approaching this the wrong way? Do I perhaps need another build configuration for publishing that depends on the first build configuration?
It turns out that when packing NuGet packages to an external resource as I first did, for example \\foo-server\temp\nuget, the packages are not copied to the destination until the build is complete, i.e. after step 3 should have executed.
I solved it by just using 'nuget' as the destination directory, which creates a temp folder in the checkout directory of the build agent. That way the packages are available inside the build configuration, so step 3 can access them.
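
In step terms (the runner field names are from memory and may differ between TeamCity versions):

Step 2 - NuGet Pack
  Output directory: nuget        (relative to the checkout directory)
Step 3 - NuGet Publish
  Packages: nuget/*.nupkg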
