Say I have a tag-based pipeline on Bitbucket like this:
pipelines:
  tags:
    v*:
      - step:
I don't want to run pipelines on PATCH version changes (semver).
For example, when going from tag v1.0.0 to v1.0.1 I don't want to run pipelines. Pipelines should only run when there is a MINOR or MAJOR change, like v1.0.0 to v1.1.0 or v2.0.0.
How do I do this?
Is there a [skip ci] equivalent when pushing tags to indicate I don't want the pipeline to run for this tag?
Unfortunately, I cannot think of an easy way to achieve such behaviour, but I can present an idea for a workaround.
First and foremost, we cannot specify patterns to skip the pipeline, only patterns to trigger it. Once we have entered the step, we can check the condition we're interested in and perform an early, graceful exit if it is met, like this:
script:
  - <check if tag is a PATCH update> && echo "Will not run the step for a PATCH update; gracefully exiting" && exit 0
Be advised that your step will still technically be triggered, so its startup time will still count against your quotas.
Second: what does this <check if tag is a PATCH update> actually look like?
Bitbucket exposes the name of the tag that triggered the build in the environment variable called BITBUCKET_TAG (see this doc for details).
However, the tag name alone is not enough to decide whether you want to run the step. You will need to:
Retrieve the value of the previous tag using the Bitbucket REST API. This endpoint looks like something you can make use of.
Compare it to $BITBUCKET_TAG in the CLI using bash, or something friendlier like Python, if it's included in your build environment.
By 'compare' I mean checking whether the tag update was a PATCH bump or not. Let me omit the implementation details :)
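If you do want a starting point, here is a rough bash sketch of such a check. It assumes curl and jq are available in the build image, that $BB_USER and $BB_APP_PASSWORD are repository variables holding credentials with read access to the repository, and that all tags strictly follow vMAJOR.MINOR.PATCH; treat it as an illustration rather than a drop-in solution.

# Fetch recent tags via the Bitbucket Cloud 2.0 API and pick the previous v* tag.
PREV_TAG=$(curl -s -u "$BB_USER:$BB_APP_PASSWORD" \
  "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/refs/tags?sort=-target.date&pagelen=20" \
  | jq -r '.values[].name' | grep '^v' | grep -vx "$BITBUCKET_TAG" | head -n 1)

# Compare the MAJOR.MINOR part of the previous and current tags;
# if they are identical, the new tag is only a PATCH bump.
prev_mm=$(echo "$PREV_TAG" | sed 's/^v//' | cut -d. -f1,2)
curr_mm=$(echo "$BITBUCKET_TAG" | sed 's/^v//' | cut -d. -f1,2)
if [ "$prev_mm" = "$curr_mm" ]; then
  echo "Will not run the step for a PATCH update ($PREV_TAG -> $BITBUCKET_TAG); gracefully exiting"
  exit 0
fi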
This is cumbersome, and possibly unfeasible depending on the lifecycle of your repository, but you could write an alternative pipeline that does nothing for PATCH releases of your current MINOR version, and bump that tag pattern in the yaml whenever you bump the MINOR (or MAJOR) version, along with any other release chores you perform.
definitions:
  yaml-anchors:
    - &your-step
      script:
        - do your thing
    - &noop-step
      script:
        - exit 0

pipelines:
  tags:
    v1.n.*: # bump me!
      - step: *noop-step
    v*:
      - step: *your-step
The same question has been answered here:
https://community.atlassian.com/t5/Bitbucket-questions/bitbucket-skip-pipeline-based-on-tag/qaq-p/2008586
The gist is that [skip ci] in the commit message causes no pipeline to run.
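For example (assuming, as the linked answer suggests, that Bitbucket also honours the marker for tag pushes when the tagged commit's message contains it):

git commit --allow-empty -m "Release v1.0.1 [skip ci]"
git tag v1.0.1
git push origin v1.0.1   # no pipeline should be triggered for this tag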
I have a build configuration that runs on multiple branches. I would like to set a failure condition that compares code coverage numbers, but only within the current branch.
What I've done:
Added a build step that sets a custom parameter "CurrentBranch" to the value of the current branch (either %teamcity.build.branch%, or %teamcity.pullRequest.target.branch% if it's a PR build)
Added a build step that successfully adds a tag to the build (using the REST API) with the value of %CurrentBranch%.
Added a failure condition that compares the current code coverage with the last successful build with tag %CurrentBranch%
However, when I execute the build, I get a warning:
Invalid settings for build failure on metric
buildTag: Tag must be specified
If I explicitly set the tag in the "failure condition" properties to one of the branches, it works as expected.
Anyone know what I'm doing wrong?
I've also tried using the original %teamcity.build.branch% parameter. That did not produce the warning. So it seems to support parameters, but not parameters set during the build?
Edit: If I initialize my custom %CurrentBranch% parameter with %teamcity.build.branch% I no longer get the warning. Will need to check if (in the case of pull-requests) it actually uses the correct branch for metrics comparison.
The behavior still seems odd. If anyone can shine a little light on this, I'd be very thankful!
Sorry, I'm a little new to Ansible and don't quite understand playbooks completely, and I was wondering if someone would be able to help me identify and fix my issue. I want to run a command which checks if a service is enabled and returns "true" in stdout, and this is what I have so far.
Side note (in case the command seems confusing): the first parameter is the location of a binary that I must use, followed by the parameters I must provide the binary to receive my result.
command:
  /opt/tableau/tableau_server/packages/customer-bin.123/tsm configuration get -k service.jmx_enabled:
    exit-status: 0
    stdout:
      - "True"
Unfortunately, this test case seems to be failing and produces the result: "stdout: patterns not found: [true]", and I have no idea what I'm doing wrong. If someone could look this over for me, that would be awesome!
EDIT: The default_test.yml playbook is run by molecule when testing roles.
Take a look here:
https://linuxhint.com/print-command-output-ansible/
Perhaps you want to use the debug module to output the result.
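For example, a hypothetical pair of tasks that registers the command output and prints it, so you can see exactly what the binary returns (the task names and the registered variable are illustrative):

- name: Check whether JMX is enabled
  command: /opt/tableau/tableau_server/packages/customer-bin.123/tsm configuration get -k service.jmx_enabled
  register: jmx_result

- name: Print the raw output for debugging
  debug:
    var: jmx_result.stdout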
Having investigated the issue further, I was able to uncover that Ansible STDOUT comparisons (using the method highlighted in the original question) are case-sensitive and do not account for differences in capitalisation; an exact match of the "pattern" is expected. In my case, the execution of the command returned "true" and not "True", so by simply modifying the expected pattern I was able to resolve the issue. If the execution of this step is not dependent on a testing suite or CI/CD pipeline such as Atlassian Bamboo (or others), I do suggest taking a look at Andylondon's answer, which recommends registering the STDOUT output to a variable, which can then be used to debug any issues.
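In other words, the only change needed in the test above was to make the expected pattern match the exact casing the command prints:

    stdout:
      - "true"   # the command prints "true", not "True"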
I am versioning Microsoft Access VBA code, which is in general case-insensitive. However, changes to the case of variable names happen every now and then (made by the Access compiler or by the developer). This often leads to huge change sets in my git workspace.
How can I revert or ignore changes that only concern the upper- or lowercase of file contents?
An example:
git init
# printf is used so the \n is expanded into a real line break
printf 'public sub example()\nend sub\n' > mdlExample.ACM
#                  ^-- lower e
git add --all
git commit --all --message "Initial Commit"
printf 'public sub Example()\nend sub\n' > mdlExample.ACM
#                  ^-- upper E
I would love something like:
git restore --only-case-changes # not working
And then:
git status
> On branch master
> nothing to commit, working tree clean
Consider changing example="example" to Example="Example". How do you propose Git could decide which case change to ignore here? Now consider code snippets in comments, or stored as strings for code generators. I get wanting Git to make an annoying chore go away, but I think if you try to imagine telling Git exactly what you want you'll understand the context of your question a little better.
How can I revert or ignore changes, that only concern upper- or lowercase of file contents
When you want to temporarily ignore changes, i.e. when you want to do a diff or a blame without seeing those changes, you can use a "textconv" filter that normalizes the text you diff. I use these to do things like strip embedded timestamps out of generated HTML when diffing; the quickest example to hand at the moment is
[diff "doc-html"]
textconv = "sed 's,<span class=\"version\">Factorio [0-9.]*</span>,,;s,<[^/>][^>]*>,\\n&,g'"
wordRegex = "<[^>]*\\>|[^< \\t\\n]*"
in .git/config, and
doc-html/*.html diff=doc-html
*.cfg -diff
in .git/info/attributes.
so my what-changed diffs don't show me things I don't care about.
If you want to see the results of a diff ignoring case, try
[diff "nocase"]
textconv="tr A-Z a-z"
and drop * diff=nocase (or maybe *.vba diff=nocase) into .git/info/attributes. When you're done, take it out.
But for merging, my lead-off example should convince you that Git automatically and silently making case changes in repo content, even just in text that looks like identifiers, is a Bad Idea. When there's a conflict, not just a one-sided change but two different changes, it's still going to take some human judgement to decide what the result should be. Fortunately, with any decent merge tool, resolving simple conflicts is down around the subsecond range each.
You don't have to git restore anything: you could set up a clean content filter driver, as illustrated here, which will automatically convert those cases on git diff/git commit.
That means:
you won't even see there is a diff
you won't add/commit anything, because of that content filter driver.
(Image from the "Keyword Expansion" section of the Pro Git book.)
This is done through:
a .gitattributes filter declaration, which means you can associate it only with certain files (for instance, by their extension):
    *.ACM filter=ignoreCase
a local git config filter.<driver>.clean entry to declare the actual script (which should be in your PATH):
    git config filter.ignoreCase.clean ignoreCase.sh
    # that gives a .git/config file with:
    [filter "ignoreCase"]
        clean = ignoreCase.sh
The trick is:
Can you write a script which takes the content of an ACM file as input and produces the same ACM file as output, but with its strings converted?
The filter can also receive the filename, so your script could diff against the committed version and detect whether a given difference is case-only, but you still need to write the right command to normalize only those "xxx" strings whose case changed in ACM files.
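As a purely illustrative starting point (and to show the interface a clean filter has to honour: file content on stdin, normalized content on stdout), a naive ignoreCase.sh could simply lower-case everything. Note that this would also mangle string literals and comments, which is exactly the objection raised in the comments above, so a real filter would need to understand the VBA and only touch identifiers.

#!/bin/sh
# Naive clean filter: reads the file content on stdin and
# writes a case-normalized version to stdout.
tr '[:upper:]' '[:lower:]'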
Note: jthill suggests in the comments setting the merge.renormalize config setting, to tell Git that the canonical representation of files in the repository has changed over time.
Have you considered the answer to this Stack Overflow question:
How to perform case insensitive diff in Git
Maybe you can write a script to do the diff comparison for each commit and then add those commits to your branch. It may not be as simple as you'd like, but it might simplify the display of the changes and let you get to the case-insensitive changes quicker.
I have two cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make the second feature (AddAccountingYear.feature) run before the first one (DeleteAccountingYear.feature)?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another feature looks to me like a test design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use the cucumber Background: option. The steps provided in the background will be run for each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted accounting year

  Scenario: add new accounting year
    Then I add new account year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now from the Before hook you can do the required setup for this specific tag. The Before hook (for Ruby) will look like:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the account year
end
If the delete-account-year logic is written as a function, then the only thing required is to call this method in the Before hook. This way the code will be DRY-compliant as well.
If these options don't work for you, another way of forcing the order of execution is to use a batch/shell script. You can add individual cucumber commands for each feature in the order you would like to execute them and then just run the script, as sketched below. The downside is that a separate report will be generated for each feature file, and this is not something I would recommend, for the reasons mentioned above.
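For completeness, such a script could be as simple as this sketch (the feature paths are assumed from the question); again, each invocation produces its own report:

#!/bin/sh
# Run the features in the required order, one cucumber invocation each.
cucumber features/DeleteAccountingYear.feature
cucumber features/AddAccountingYear.feature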
From Justin Ko's website - https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/ the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other, you could change the name of the feature file or put it in a separate feature folder with a name that comes alphabetically first.
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside to doing this is that it's a duplication of effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo it may be difficult to maintain the test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing but is often a pragmatic approach if your tests take a long time to run and adding the add data step to the set up for delete would add a lot of time to your test suite.
Edit: After reading that link to Justin Ko's site, you can specify the features to run when you invoke cucumber, and it will run them in the order that you give. For any whose order you don't care about, you can just put the whole feature folder at the end and cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features
How do I conditionally skip a scenario?
For example, I wish to continue a scenario only if certain conditions are met, but I do not want it to register as a failure if it's not present.
This is an issue I had. The tests I write are against a UI with a constantly changing back-end database in which I am currently unable to have static data.
This means that some times it is possible that there is no data for the test.
Not a pass not a fail, just unable to run.
The way that I found to work best was to invoke a cucumber pending.
example test:
Scenario: Test the application
  Given my application has data
  When I test something
  Then I get a result
example step def:
Given /^my application has data$/ do
  pending unless application.has_data?
end
These are the kind of results I can see:
201 scenarios (15 pending, 186 passed)
1151 steps (15 pending, 1136 passed)
It's worth noting that I have extra debugging and have these tests tagged so that at any time I can run these pending tests again.
Hope this helps,
Ben.
For anyone still looking for an answer to this:
Apart from using pending, or a specific profile to skip scenarios with certain tags, there are at least two more ways to achieve this.
I can understand why you would need this, as I had a similar problem and found a solution, hence it's worth sharing.
In my case, I had a piece of functionality expected to be available on 3 out of 10 devices, and expected to be unavailable on the remaining 7.
Caveats with using 'pending' to skip:
Since the tests and code were implemented, it didn't feel right to mark steps as pending.
It caused confusion, as it was difficult to distinguish genuinely pending scenarios from skipped-but-marked-pending scenarios at the end of a run.
Some CI jobs (Jenkins/Hudson) might be configured to fail for pending scenarios, causing even more trouble.
So I wanted instead to just skip them during execution, depending on which browser was being used. I also didn't want to have too many profiles specific to certain browsers/devices.
Solution:
Use cucumber.yml to skip tagged scenarios conditionally
Here's a little-known but useful fact about cucumber (from https://github.com/cucumber/cucumber/wiki/cucumber.yml):
The cucumber.yml file is preprocessed by ERb; this allows you to use ruby code to generate values in the cucumber.yml file
Building on this, tag your scenarios with something unique, say @conditional
At the beginning of your cucumber config (cucumber.yml), apply your conditional logic outside of any profiles mentioned:
<% included = (ENV['BROWSER'] =~ /chrome/) ? "-t @conditional" : "-t ~@conditional" %>
included is just a variable, which will have a value of tags to include/exclude depending on the condition
Now use this conditional variable in the default profile
default: <%= included %>
So now your default profile will use the included/excluded tests as identified by your conditional logic.
(More complicated and not elegant) Use rake tasks for cucumber execution:
Conditionally choose tags to include/exclude within your rake task, and pass them to cucumber execution.
Hope this helps.
You could check the condition before you start cucumber, then use a profile that would skip the scenarios with certain tags. Put this in your cucumber.yml:
default: --tags ~@wip --tags ~@broken --no-source --color
limited: --tags @core --tags ~@wip --tags ~@broken --no-source --color
Replace @core with whatever tag you use for the cukes you want to run (or use ~ to exclude cukes). Then run the limited profile from a shell script that checks the conditions:
cucumber -p limited
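A minimal sketch of such a wrapper (the condition here is hypothetical; substitute whatever check your environment needs):

#!/bin/sh
# Pick the cucumber profile based on an external condition.
if [ "$BROWSER" = "chrome" ]; then
  cucumber -p limited
else
  cucumber
fi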
Please see this solution, which truly skips the scenario instead of throwing a pending error:
Before do |scenario|
  scenario.skip_invoke!
end
I am tagging my scenarios, and then in my "step_definitions/hooks.rb" file, I have something like this:
Before('@proxy') do
  skip_this_scenario unless proxy_running?
end
scenario.skip_invoke!, which was mentioned in another answer, seems to be deprecated.