Setting a DevOps task drop-down option from a variable - google-play

I am writing a release pipeline to upload an APK (installer) to Google Play, using the Google Play - Release task. This is a classic pipeline (our code is in TFS).
One of the options is the Track to upload the apk to. The options are:
I want to set this option from a variable that is set in one of the previous tasks. A previous task sets the release.task variable to either Internal test or Production, depending on whether or not it is a public release. I am using it in the Google Play task like this:
However when I run the pipeline it does not recognise the value, even though it is one of the valid options:
Is there a way to get around this? I need to control which track the pipeline writes to based on a value in our code base.

I've found the issue with this scenario: you can set the value from a variable, but the values this task accepts differ from those displayed in the UI. For the two tracks I am using, the values are:
Production -> production (note the difference in case)
Internal test -> internal
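Since the Track field accepts a variable, the preceding script step just has to emit the task's internal value. A minimal sketch of that step as a shell script, assuming a hypothetical IS_PUBLIC_RELEASE flag (only the track values "production"/"internal" and the release.task variable name come from the scenario above):

```shell
# Emit the Azure DevOps logging command that sets the pipeline variable.
# Use the task's internal values ("production"/"internal"), not the UI labels.
if [ "$IS_PUBLIC_RELEASE" = "true" ]; then
  TRACK="production"   # UI label: "Production"
else
  TRACK="internal"     # UI label: "Internal test"
fi
echo "##vso[task.setvariable variable=release.task]$TRACK"
```

The Google Play - Release task's Track option can then reference $(release.task).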

Query Wildfly for a value and then use that in a CLI script

I have an Ansible script to update and maintain my WildFly installation. One of the tasks in this setup manages the MySQL driver, and in order to update that driver I first have to disable the application that uses it, before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that I am very dependent here on the actual name of the application, and that name can change over time since it contains my version information.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo, I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel I might be going in the right direction, even though I have not got it right yet.
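One workaround is to drive jboss-cli from the shell and clean up the listing there instead of inside the CLI. A rough sketch, where the sample value stands in for the real query (the JBOSS_HOME path and the single-deployment assumption are mine):

```shell
# In a real run you would capture the listing from the CLI, e.g.:
#   RAW=$("$JBOSS_HOME/bin/jboss-cli.sh" -c --commands="ls /deployment")
RAW='["my-app-1.1.ear"]'                        # the bracketed value observed above
DEPLOYMENT=$(printf '%s' "$RAW" | tr -d '[]"')  # strip brackets and quotes
echo "$DEPLOYMENT"
# "$JBOSS_HOME/bin/jboss-cli.sh" -c --command="deployment disable $DEPLOYMENT"
```

This keeps the version-dependent name out of the script entirely; whatever is deployed is what gets disabled.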

How does a Gradle task explicitly mark itself as up to date, or as having altered its output, for tasks that depend on it?

I am creating a rather custom task that processes a number of input files and outputs a different number of output files.
I want to check the dates of the input files against the existing output files, and possibly also look at the content of the input files, to determine whether the task is up to date or needs to run. What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that the dependency checker and task executor see the right state and either force dependents to build or skip them as appropriate?
Also, is there any documentation on standard library utilities for things like checking file dates and getting lists of files, as easy as those in Ruby's Rake?
How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what exists.
A sample that does this in a larger project with dependent tasks would be really nice :)
What properties do I need to set, and where and when (in a doFirst, in the main action, or elsewhere), so that the dependency checker and task executor see the right state and either force dependents to build or skip them as appropriate?
Ideally this should be done as a custom task type. None of this logic should be in any of the Gradle files at all. Either have the logic in a dedicated plugin project that gets published somewhere which you can then reference in the project, or have the logic in buildSrc.
What you are trying to develop is what is known as an incremental task: https://docs.gradle.org/current/userguide/custom_tasks.html#incremental_tasks
These are used heavily throughout Gradle which makes the incremental build of Gradle possible: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:up_to_date_checks
How do I specify the inputs and outputs of the task? Especially as the outputs will not be known until the source is parsed and the output directory is scanned for what exists.
Once you have your tasks defined and whatever else you need, in your main Gradle files you would configure them as you would any other plugin or task.
The two links above should be enough to help get you started.
As for a small example, I developed a Gradle plugin that generates files based on some input that is not known until it's configured. The 'custom' task type just extends the provided JavaExec. The custom task is Wsdl2java. Then, based on user configuration, tasks get registered here using the input file from the user. Since I reused built-in task types, I know for sure that no extra work will be done and I can rely on Gradle doing the heavy lifting. There's also a test to ensure that the configuration cache works as expected: ConfigurationCacheFunctionalTests.
As I mentioned earlier, the links above should be enough to get you started.
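For orientation, a minimal sketch of such a custom task type (Groovy DSL, e.g. in buildSrc); the class and property names here are illustrative, not from the plugin above:

```groovy
// Declaring inputs and a whole output directory lets Gradle do the
// up-to-date checking itself: if the declared inputs and everything
// under outputDir are unchanged, the task and its dependents are skipped.
abstract class GenerateFiles extends DefaultTask {
    @InputFiles
    abstract ConfigurableFileCollection getSources()

    // Declaring the directory (not individual files) covers the case where
    // the concrete outputs are only known after parsing the sources.
    @OutputDirectory
    abstract DirectoryProperty getOutputDir()

    @TaskAction
    void run() {
        sources.each { src ->
            // ... parse src and write the resulting files under outputDir ...
        }
    }
}
```

With the inputs and outputs declared this way, there is nothing to set in a doFirst: Gradle computes the up-to-date state before the action runs.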

How to prevent multiple TeamCity agents from being picked up by a specific job

I would like my build job not to build on specific TeamCity agents, and there is no common pattern in the agent names. Is there a way I can prevent multiple agents from being picked up by a single job?
For example, I can make sure that the build job does not pick up one particular agent using the following requirement:
teamcity.agent.name
does not match
agent-001
How can I similarly tell the TeamCity job not to run on the following agents as well?
"123-agent"
"my_agent"
"test_agent"
"agent_do_not_use"
I cannot use the same teamcity.agent.name parameter with "does not match" for multiple agents. What is the best way to achieve this?
You can add an agent requirement with a "does not match" condition, which accepts a regular expression, and set it to:
123-agent|my_agent|test_agent|agent_do_not_use
Using an agent requirement based on the presence (or absence) of a specific property from the agent's buildAgent.properties file would probably be a better solution than using agent names in the requirement.
Alternative ways to manage agent compatibility are agent pools and restricting the agent's Compatible Configurations to a limited set.
You can add a specific parameter to the agent configuration on the local machine, inside C:\BuildAgent\conf\buildAgent.properties.
Then you can add something specific like: system.Is<MyFeature>Available=True
Then, in the TeamCity configuration, add an Agent Requirement on this parameter.

How to enqueue more than one build of the same configuration?

We are using two TeamCity servers (one for builds and one for GUI tests). The GUI tests are triggered by an HTTP GET in the last step of the build (as in http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP).
The problem is that only one build of the same configuration can sit in the queue at the same time. Is there a way to enable multiple queued builds of the same configuration? Can I use some workaround like sending a dummy id?
At the bottom of the "Triggering a Custom Build" section here: http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP, you can find information about passing custom parameters to the build.
Just define some unused configuration parameter, like "BuildId", and pass, for example, the current date (a GUID will work as well) to it:
...&buildId=12/12/23 12:12:12
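Sketched as a shell step, assuming the add2Queue form described on that page (the server URL, build configuration name, and credentials are placeholders):

```shell
# Build a unique dummy value so identical trigger requests are not
# collapsed into one queued build. A timestamp plus the PID is enough.
BUILD_ID="$(date +%s)-$$"
URL="http://teamcity.example.com/httpAuth/action.html?add2Queue=GuiTests&name=BuildId&value=$BUILD_ID"
echo "$URL"
# curl -s -u user:password "$URL"
```

Because the BuildId value differs on every trigger, each request is treated as a distinct build rather than deduplicated in the queue.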

Dynamically identify a Jenkins build

Currently, I am initiating a build by posting a few parameters to Jenkins from a shell script. I need to check whether the build succeeded or failed, and I want to avoid the post-build Jenkins script calls (I don't want Jenkins to initiate the running of any scripts on my server). The idea was to poll Jenkins every 10 seconds or so (while building != false) to get the JSON object with the various build parameters. While this works fine when I know the number of the build I want to check on, I can't see a good way to dynamically keep track of the current build number and make sure my script is checking on the build it just initiated, not some other build currently running.
Potentially, multiple builds could be initiated within a short period of time, so posting to jenkins/job/my_build_job/lastBuild/api/json just after starting the build and checking the number that way seems prone to race conditions.
How can I keep track of a particular build dynamically from a script on my server, in order to check the success or failure of a build initiated by a post called from cron? Is there perhaps a way to name a build, so I could initiate it with BUILD_NAME and then post to jenkins/job/my_build_job/BUILD_NAME/api/json?
There are a couple of different API calls you can make:
jenkins/job/my_build_job/api/json?tree=lastBuild[number]
will give you either the last completed build or the current build in progress
jenkins/job/my_build_job/api/json?tree=nextBuildNumber
will give you the next build number - this includes builds that are queued up waiting for resources.
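Putting the two calls together as a shell sketch: read nextBuildNumber first, trigger, then poll that specific build. The job URL is a placeholder, the live calls are commented out, and the JSON is picked apart with sed from a sample response:

```shell
JOB_URL="http://jenkins.example.com/job/my_build_job"   # placeholder

# 1. Read the next build number BEFORE triggering:
#    RESP=$(curl -s "$JOB_URL/api/json?tree=nextBuildNumber")
RESP='{"_class":"hudson.model.FreeStyleProject","nextBuildNumber":42}'  # sample
NEXT=$(printf '%s' "$RESP" | sed 's/.*"nextBuildNumber":\([0-9]*\).*/\1/')

# 2. Trigger the build:
#    curl -s -X POST "$JOB_URL/buildWithParameters?foo=bar"

# 3. Poll that specific build until "result" is no longer null:
#    while :; do
#      RESULT=$(curl -s "$JOB_URL/$NEXT/api/json?tree=result")
#      case "$RESULT" in *SUCCESS*|*FAILURE*|*ABORTED*) break ;; esac
#      sleep 10
#    done
echo "polling build $NEXT"
```

Note that this still has a small race window if another trigger lands between steps 1 and 2; having the trigger call itself return the build number, as requested in the issue below, would close it.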
There is already an issue filed in Jenkins to return the build number in the Jenkins remote API call: https://issues.jenkins-ci.org/browse/JENKINS-12827. Please add comments there so it can be worked on as soon as possible.
