Drone IO: Difference between when and trigger? - continuous-integration

I need to run a Drone build when a new tag is created in Gitea.
I can see two options in the Drone documentation. Please find them below:
When
when:
  event: tag
  branch: master
Trigger
trigger:
  branch:
    - master
  ref:
    include:
      - refs/tags/**
Please explain the difference and suggest which option is the better choice.

Drone has a concept of "Pipelines" and "Steps".
A pipeline is made up of one or more steps.
The "when" block is called a condition and is used to limit step execution - i.e. a pipeline with 4 steps defined may run only 2 of them, based on the conditions set. - condition docs
Triggers are used to limit whole-pipeline execution - i.e. a pipeline may or may not run at all, based on the triggers set. - trigger docs
It sounds to me that your use case is better served by a trigger - i.e. only run this build if a tag is created.
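For illustration, a minimal .drone.yml sketch that runs the whole pipeline only when a tag is pushed; the pipeline name, image, and commands here are placeholders, not from the original question:

```yaml
kind: pipeline
type: docker
name: release

steps:
  - name: build
    image: alpine
    commands:
      # DRONE_TAG is populated by Drone for tag events
      - echo "building tag ${DRONE_TAG}"

# Whole-pipeline gate: nothing runs unless the ref is a tag
trigger:
  ref:
    include:
      - refs/tags/**
```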

Related

Github Actions: Image is pulled even if step is skipped?

A GitHub Actions workflow step such as:
- name: Some Step
  if: inputs.do_this_step
  uses: some/github/action@main
When run, if the "if" condition is false, the step gets skipped, but the container image for the action still gets pulled at the start of the workflow - presumably because the "uses:" statements are evaluated first. Is there a way to avoid this? It causes efficiency issues, pulling images which aren't going to be used.

How to pass variable to ADF Execute Pipeline Activity?

Environment:
I have around 100 pipelines that run on a number of triggers.
Outcome: I want to create a master pipeline that calls those 100 pipelines.
Currently, I've created a list of pipeline names and put them into an array. Then I was hoping to use a ForEach and Execute Pipeline activities to pass those names.
Issue: it seems that the Execute Pipeline activity does not take variables, or it is not obvious how to do it.
I do not want to build the master pipeline manually, as it can change often, and I hope there is a better way to do it than by hand.
You are correct that the "Invoked pipeline" setting of the Execute Pipeline activity does not support a variable value: the Pipeline name must be known at design time. This makes sense when you consider parameter handling.
One way around this is to create an Azure Function to execute the pipeline. This answer has the .NET code I leverage in my pipeline management work. It's a couple of years old, so it probably needs an update. If you need the pipelines to run sequentially, you'll need to build a larger framework to monitor and manage the executions, which is also discussed in that answer. There is a concurrency limit (~40 runs per pipeline, I believe), so you couldn't run all 100 simultaneously.
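As a sketch of the same idea without the .NET dependency: ADF exposes a createRun REST endpoint that takes the pipeline name in the URL, so a script (or Function) can loop over names held in a variable. The subscription, resource group, factory, and parameter names below are placeholders, and acquiring the Azure AD bearer token is assumed to happen elsewhere:

```python
# Sketch: trigger ADF pipelines by name over the REST API, instead of the
# Execute Pipeline activity (which needs the name at design time).

API_VERSION = "2018-06-01"

def create_run_url(subscription_id: str, resource_group: str,
                   factory: str, pipeline: str) -> str:
    """Build the management-plane URL for the createRun call."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DataFactory"
        f"/factories/{factory}"
        f"/pipelines/{pipeline}/createRun"
        f"?api-version={API_VERSION}"
    )

def run_all(pipeline_names, post, token):
    """POST createRun for each name; `post` is an HTTP POST callable,
    e.g. requests.post. The JSON body carries pipeline parameters."""
    headers = {"Authorization": f"Bearer {token}"}
    for name in pipeline_names:
        url = create_run_url("my-sub", "my-rg", "my-factory", name)
        post(url, headers=headers, json={})
```

Note this fires all runs without waiting; sequential execution or respecting the concurrency limit would still need the monitoring framework mentioned above.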

Robot Framework - Expected Failure after Prod Refresh

One of my automated test cases (TC) fails predictably after a prod refresh that takes place every few months.
For the TC to pass, the values should read 'N/A', which is a precondition. After getting the 'N/A' text, I insert into a table to create values and then do other steps.
After the refresh, there are values (monies) instead of the 'N/A'.
What are the ways to avoid the failure? Run Keyword If and Run Keyword And Expect Failure would invalidate the original TC, and it would always pass, which I apparently don't need.
There might be other approaches too; however, one way to approach this problem is:
You can define an init file in the directory:
__init__.robot
The suite setup and suite teardown in that file run before anything in the underlying folders.
Make use of Set Global Variable with 'N/A' and update it when you see actual values, i.e. every test case would verify whether the variable contains 'N/A' or actual values (i.e. not 'N/A'); this can be done with a Test Setup keyword.
NOTE: You can also use Set Suite Variable for the same purpose.
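A minimal sketch of such an __init__.robot; the keyword and variable names here are hypothetical, not from the original question:

```robotframework
*** Settings ***
Suite Setup    Initialise Precondition Flag

*** Keywords ***
Initialise Precondition Flag
    # Assume the column reads 'N/A' right after a prod refresh;
    # individual tests can later update this global when real values appear.
    Set Global Variable    ${EXPECTED_VALUE}    N/A
```

Each test case's Test Setup can then branch on ${EXPECTED_VALUE} instead of hard-coding the 'N/A' precondition.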

Is it possible to reuse godog scenarios?

I have a defined scenario in godog:
User starts a workspace with stack
  Given Minishift has state "Running"
  When user triggers workspace creation for stack
  Then workspace should be starting
  When user looks at the workspace status
  Then the workspace status should be running and creation successful
and I was wondering if it was possible to reuse this scenario for multiple stacks? Ideally, I would reuse this scenario for every stack and if that stack failed then I would fail that scenario but not all the tests. Each stack is independent of the others. I'm not sure if this is possible or if I have to manually define each stack as a scenario and do it that way.
Scenario Outline with Examples (as documented here for behat, but also implemented in Godog) does what you describe:
Scenario Outline: User starts a workspace with stack
  Given "<stack>" has state "Running"
  When user triggers workspace creation for stack
  Then workspace should be starting
  When user looks at the workspace status
  Then the workspace status should be running and creation successful

  Examples:
    | stack     |
    | Minishift |
    | Redshift  |
    | Lateshift |
Your scenario will be called three times, with the parameters [Minishift, Running]; [Redshift, Running] and finally [Lateshift, Running] being passed to the first step.
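To see how those parameters reach your Go code: godog matches each expanded step against a step definition's regular expression and passes the captures to the handler. The sketch below demonstrates that matching with only the standard library; the pattern and function names are illustrative, and in real godog code you would register the handler with ctx.Step(pattern, stackHasState) inside InitializeScenario:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pattern a step definition might use for: Given "<stack>" has state "Running"
var pattern = regexp.MustCompile(`^"([^"]*)" has state "([^"]*)"$`)

// stackHasState is the handler godog would call with the two captures.
func stackHasState(stack, state string) error {
	// Here you would start or verify the named stack; we just report it.
	fmt.Printf("ensuring %s is %s\n", stack, state)
	return nil
}

func main() {
	// The outline expands "<stack>" for each Examples row before matching.
	for _, step := range []string{
		`"Minishift" has state "Running"`,
		`"Redshift" has state "Running"`,
		`"Lateshift" has state "Running"`,
	} {
		m := pattern.FindStringSubmatch(step)
		stackHasState(m[1], m[2])
	}
}
```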
All of the steps in this scenario can be summarized as a single step like Given minishift is "running" at workspace named "stack". The point is that this one step definition can invoke all of the steps defined in the scenario described, and that is a common practice for preventing redundant context across multiple scenarios.
In general, you can combine a stack of steps into a single precondition; such composition of steps is an advocated practice.
Furthermore, I've noticed you have two When steps in your scenario definition, and that is not something I would suggest. When is a single action, and to prevent ambiguity you should not have two actions in a scenario: it becomes unclear what exactly the scenario is about, or why exactly it fails when it does. In this case it means you have two scenarios here instead of one.
Keep your scenarios clear and small; they should be reusable and composable.
Another option is to tag the scenarios that need minishift running, for example:
@minishift
Scenario: expected behavior
  When I do that action
  Then I expect this outcome
Then you can use godog's before-scenario hook to start minishift as expected for the tagged scenarios.

Run Teamcity configuration N times

In my set of TeamCity configurations, I decided to make something like an aging test* and run a single configuration 100 times.
Can I do that in a few simple clicks?
*aging test - a test showing that, over time/aging, the results will not change.
As of now, this is not possible from the UI. If you queue one build configuration a few times without any changes, the queued builds will be merged and only one will be executed. If you want to run 100, you have to trigger them one by one, each after the previous one has finished executing.
A better solution is to trigger the builds from a script using the REST API (for more details, see the documentation); if the builds have different values in custom parameters, they will all be put in the queue.
HOW: Define a dummy custom parameter and trigger the build from a script within a loop, passing the value of the loop variable as the parameter value. TeamCity will then treat them as different builds and execute all of them.
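The loop above can be sketched as follows, using TeamCity's REST endpoint for queuing builds (POST to /app/rest/buildQueue with an XML body). The server URL, build configuration id, and the "dummy.iteration" parameter name are placeholders, and authentication is assumed to be handled elsewhere:

```python
# Sketch: queue the same TeamCity build configuration N times by varying
# a dummy parameter so the queued builds are not merged.

def build_payload(build_type_id: str, iteration: int) -> str:
    """XML body for POST /app/rest/buildQueue; the varying parameter
    value stops TeamCity from collapsing identical queued builds."""
    return (
        f'<build><buildType id="{build_type_id}"/>'
        f'<properties><property name="dummy.iteration" '
        f'value="{iteration}"/></properties></build>'
    )

def queue_builds(post, server: str, build_type_id: str, n: int = 100):
    """`post` is an HTTP POST callable, e.g. requests.post."""
    url = f"{server}/app/rest/buildQueue"
    headers = {"Content-Type": "application/xml"}
    for i in range(n):
        post(url, data=build_payload(build_type_id, i), headers=headers)
```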
