How can I create several executors for a job in a CircleCI orb?

NOTE: The actual problem I am trying to solve is running Testcontainers in CircleCI.
To make it reusable, I decided to extend the existing orb in my organisation.
The question is: how can I create several executors for a job? I was able to create the executor itself.
Executor ubuntu.yml:
description: >
  The executor to run testcontainers without extra setup in Circle CI builds.
parameters:
  # https://circleci.com/docs/2.0/configuration-reference/#resource_class
  resource-class:
    type: enum
    default: medium
    enum: [medium, large, xlarge, 2xlarge]
  tag:
    type: string
    default: ubuntu-2004:202010-01
resource_class: <<parameters.resource-class>>
machine:
  image: <<parameters.tag>>
Here is one of the jobs:
parameters:
  executor:
    type: executor
    default: openjdk
  resource-class:
    type: enum
    default: medium
    enum: [small, medium, medium+, large, xlarge]
executor: << parameters.executor >>
resource_class: << parameters.resource-class >>
environment:
  # Customize the JVM maximum heap limit
  MAVEN_OPTS: -Xmx3200m
steps:
  # Instead of checking out code, just grab it the way it is
  - attach_workspace:
      at: .
  # Guessing this is still necessary (we only attach the project folder)
  - configure-maven-settings
  - cloudwheel/fetch-and-update-maven-cache
  - run:
      name: "Deploy to Nexus without running tests"
      command: mvn clean deploy -DskipTests
I couldn't find a good example of adding several executors, and I assume that I will need to add ubuntu and openjdk for every job. Am I right?
I keep looking into other orbs and the documentation but cannot find a case similar to mine.

As stated in the CircleCI documentation, executors can be defined like this:
executors:
  my-executor:
    machine: true
  my-openjdk:
    docker:
      - image: openjdk:11
As a side note, there can be many executors of any type, such as docker, machine (Linux), macos, or win.
See the Stack Overflow question on how to invoke executors from CircleCI orbs.
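For completeness, here is a minimal sketch of how a consuming config could pass different executors to the same orb job; the orb name myorb and the job name deploy are hypothetical, only the executor and resource-class parameters come from the snippets above:
workflows:
  build-and-deploy:
    jobs:
      # Run the job on the Docker-based openjdk executor (the job's default)
      - myorb/deploy:
          executor: myorb/openjdk
      # Run the same job on the machine-based ubuntu executor with a larger class
      - myorb/deploy:
          executor:
            name: myorb/ubuntu
            resource-class: large
This way the job is declared once, and each invocation picks its executor through the executor parameter instead of duplicating the job per executor.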

Related

How to resolve "input ConnectedServiceName expects a service connection of type AzureRM" error?

I am learning how to create an Azure pipeline and ran into the following error:
The pipeline is not valid. Job Phase_1: Step
AzureResourceGroupDeployment input ConnectedServiceName expects a
service connection of type AzureRM but the provided service connection
"MY-SERVICE-CONNECTION-NAME" is of type generic.
What am I missing here?
azure-pipelines.yml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - cosmos
  batch: True
jobs:
  - job: Phase_1
    displayName: Phase 1
    cancelTimeoutInMinutes: 1
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self
      - task: AzureResourceGroupDeployment@2
        displayName: 'Azure Deployment: Create Or Update Resource Group action on DISPLAY-NAME'
        inputs:
          # azureSubscription: 'SUBSCRIPTION'
          ConnectedServiceName: MY-SERVICE-CONNECTION-NAME
          resourceGroupName: DISPLAY-NAME
          location: West US # TBD
          csmFile: cosmos/deploy.json
          csmParametersFile: cosmos/parameters-dev.json
          deploymentName: DEPLOYMENT-NAME
I tried values from "service connections" but I am not sure what the issue is here.
The error message is telling you the exact problem: your service connection needs to be an Azure Resource Manager service connection. Create a service connection of that type.
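As a hedged illustration (the connection name MY-ARM-CONNECTION is a placeholder, not taken from the question), once an Azure Resource Manager service connection exists the task can point at it; azureSubscription is the alias for ConnectedServiceName on this task:
- task: AzureResourceGroupDeployment@2
  inputs:
    # azureSubscription is the alias of ConnectedServiceName; it must name an AzureRM connection
    azureSubscription: MY-ARM-CONNECTION
    resourceGroupName: DISPLAY-NAME
    location: West US
    csmFile: cosmos/deploy.json
    csmParametersFile: cosmos/parameters-dev.json
    deploymentName: DEPLOYMENT-NAME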
I can reproduce the issue.
As Daniel said, this is caused by the service connection type.
This document describes the parameters of the task:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceGroupDeploymentV2/README.md#parameters-of-the-task
Let me share a little trick that can help you avoid this type of problem in the future: when you type '- task: sometask@version' in the right place in the pipeline's YAML file in DevOps, you will see a 'Settings' button in the upper left. Click it and you can set the values through the UI, which filters the appropriate options for you.

Unable to deploy pre-built image in App Engine standard environment (GCP)

My Spring Boot application was building in Cloud Build and deploying without any issue until September.
Now my trigger fails at gcloud app deploy.
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
  - url: /.*
    script: this field is required, but ignored
cloudbuild.yaml
steps:
  # backend deployment
  # Step 1:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["test"]
  # Step 2:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["clean", "install", "-Dmaven.test.skip=true"]
  # Step 3:
  - name: docker
    dir: 'service'
    args: ["build", "-t", "gcr.io/service-base/base", "."]
  # Step 4:
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/service-base/base"]
  # Step 5:
  - name: 'gcr.io/cloud-builders/gcloud'
    dir: 'service/src/main/appengine'
    args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
    timeout: "30m0s"
  # Step 6:
  # dispatch.yaml deployment
  - name: "gcr.io/cloud-builders/gcloud"
    dir: 'service/src/main/appengine'
    args: ["app", "deploy", "dispatch.yaml"]
    timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Cloud build error
Thanks in advance. I'm confused about how my build was working fine before and what I am doing wrong now.
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; Buildpacks are then used to create a standard container on Google's side (for information, a new Cloud Build job is run for this), which is deployed to App Engine.
I recommend having a look at Cloud Run for your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
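A hedged sketch of what the deploy step could look like on Cloud Run instead; the service name, region, and platform flag are assumptions, only the image comes from the original build:
# Deploy the image already pushed in the previous step to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'service',
         '--image=gcr.io/service-base/base',
         '--region=us-central1',
         '--platform=managed']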
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error because the system numbers steps from 0.
The error message is accurate: App Engine standard differs from App Engine flexible in that the latter (flexible) permits container image deployments; App Engine standard deploys from sources.
See Google's example.
It's possible that something has changed on Google's side that's causing the issue, but the env: standard in your app.yaml suggests the build file has changed.
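As a hedged sketch of the source-based route only (the jar path and app.yaml location are assumptions based on the directories above): for java11 standard the deploy step would point at the built artifact plus app.yaml, for example via --appyaml, rather than at a container image.
# Replace the --image-url deploy step with an artifact/source deploy
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service'
  args: ['app', 'deploy', 'target/service.jar',
         '--appyaml=src/main/appengine/app.yaml']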

Azure DevOps agent must finish the current build before taking another one

Situation
I'm working on an AzureDevops Server 2020 with only one agent.
I have 2 build pipelines:
Build pipeline (yaml)
Merge pipeline (yaml)
Each pipeline contains multiple stages, each with only one job (because all tasks of a stage must run on the same agent).
Current behavior
If I run the two pipelines at the same time, the agent runs both in a "fake" parallel fashion that makes both builds very slow.
Example of agent processing order:
build-stage1, build-stage2, merge-stage1, merge-stage2, merge-stage3, merge-stage4, build-stage3...
Wanted behavior
This would not be unexpected if we had more agents than build executions, but that will never be my case.
So I would prefer to lock the agent to the current build (as is built in to Jenkins).
Example of the wanted agent processing order:
build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest), merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
Is it possible to set the agent work attribution policy?
This occurs because a job is not added to the agent queue if it depends on something (and by default it depends on the previous stage).
Using dependsOn: [] lets Azure DevOps know that a stage depends on nothing, so all jobs are added to the queue and executed in FIFO order.
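A minimal sketch of that idea (stage names are hypothetical): with an empty dependsOn, each stage's job is queued immediately instead of waiting for the preceding stage.
stages:
- stage: Build_Stage1
  displayName: Stage1
  dependsOn: []
  ....
- stage: Build_Stage2
  displayName: Stage2
  dependsOn: []   # depends on nothing, so its job enters the queue right away
  ....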
Agree with Dom.
Based on my test, I could reproduce this behavior, but I am afraid there is no method to lock the agent so that it completes one pipeline before taking another.
For a workaround:
You could use a pipeline trigger to start the Merge pipeline.
Build Pipeline:
pool:
  name: Default
stages:
- stage: Build_Stage1
  displayName: Stage1
  ....
- stage: Build_Stage2
  displayName: Stage2
  dependsOn: Build_Stage1
  ....
- stage: Build_Stage3
  displayName: Stage3
  dependsOn: Build_Stage2
  ....
Merge Pipeline:
resources:
  pipelines:
  - pipeline: TestTrigger
    source: ABC
    trigger:
      branches:
      - '*'
pool:
  name: Default
stages:
- stage: Merge_Stage1
  displayName: Merge1
  ...
- stage: Merge_Stage2
  displayName: Merge2
  dependsOn: Merge_Stage1
  ...
- stage: Merge_Stage3
  displayName: Merge3
  dependsOn: Merge_Stage2
  ...
In this case, you could queue the Build pipeline alone; the Merge pipeline will then be triggered after the Build pipeline completes.
The process: build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest) -> trigger -> merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
On the other hand, this requirement is valuable.
To get this feature, you could add your request for this feature on our UserVoice site, which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.

Google Cloud Build - How to Cache Bazel?

I recently started using Cloud Build with Bazel.
So I have a basic cloudbuild.yaml
steps:
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args: ['test', '//...']
which runs all tests of my Bazel project.
But as you can see from this screenshot, every build takes around 4 minutes, although I haven't touched any code that would affect my tests.
Locally, running the tests for the first time takes about 1 minute, but running them a second time, with the help of Bazel's cache, takes only a few seconds.
So my goal is to use the Bazel cache with Google Cloud Build.
Update
As suggested by Thierry Falvo, I've looked into those recommendations. Thus I tried to add the following to my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://cents-ideas-build-cache/bazel-bin', 'bazel-bin']
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args: ['test', '//...']
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'bazel-bin', 'gs://cents-ideas-build-cache/bazel-bin']
Although I created the bucket and folder, I get this error:
CommandException: No URLs matched
I think that rather than caching discrete results (artifacts), you want to use GCS (Cloud Storage) as a Bazel remote cache.
- name: gcr.io/cloud-builders/bazel
  args: ['test', '--remote_cache=https://storage.googleapis.com/<bucketname>', '--google_default_credentials', '--test_output=errors', '//...']
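For context, a complete cloudbuild.yaml using this approach might look like the sketch below; the bucket name is a placeholder and must be an existing Cloud Storage bucket that the Cloud Build service account can read and write:
steps:
  - id: 'run unit tests'
    name: gcr.io/cloud-builders/bazel
    args: ['test',
           '--remote_cache=https://storage.googleapis.com/my-bazel-cache-bucket',
           '--google_default_credentials',
           '--test_output=errors',
           '//...']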

Configurable Replica Number in Template using ValueFrom

I have a template.yml file that is used when deploying to any OpenShift project. Each project has a specific project-props ConfigMap to be used; this is part of our CI/CD pipeline, so each project has a unique project.props available to it.
I would like to be able to control the number of replicas and CPU/memory limits based on which project I am deploying to. For example, a branch-testing OpenShift project vs. a performance-testing OpenShift project would have different CPU requests and limits than an ephemeral OpenShift project.
My template.yml file looks something like this:
// <snip>
spec:
  replicas: "${OS_REPLICAS}"
// <snip>
resources:
  limits:
    cpu: "${OS_CPU_LIMIT}"
    memory: "${OS_MEMORY_LIMIT}"
  requests:
    cpu: "${OS_CPU_REQUEST}"
    memory: "${OS_MEMORY_REQUEST}"
// <snip>
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    valueFrom:
      configMapKeyRef:
        name: project-props
        key: os.replicas
// rest of params are similar
My relevant project-props section is:
os.replicas=2
os.cpu.limit=2
os.cpu.request=250m
os.memory.limit=1Gi
os.memory.request=1Gi
When I try to deploy this I get the following error:
quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
If I change template.yml to have the parameter value defined inline, it works fine:
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    value: 2
It seems that valueFrom and value behave differently. Is this impossible to do using valueFrom? Is there another way I can dynamically change spec and resources using a ConfigMap?
The alternative is to deploy and then use oc scale dc <deploy_config_name> --replicas=<number> but it's not very elegant.
Where you have:
spec:
  replicas: "${OS_REPLICAS}"
you should have:
spec:
  replicas: "${{OS_REPLICAS}}"
With template parameter of:
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    value: 2
See:
https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
for use of "${{}}".
What it does is interpret the contents of the parameter as JSON/YAML, rather than a string value. This allows you to supply an integer, which replicas requires.
So you don't need valueFrom, which wouldn't work anyway as that is only usable for environment variables and not arbitrary fields like replicas.
As to trying to set a default for memory and CPU for pods deployed in a project, you should look at having a LimitRange resource defined against the project and set a default.
https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-limit-ranges
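As a hedged sketch only (the resource name and the specific values are assumptions, mirroring the project-props above), a LimitRange with project-wide defaults could look like this:
apiVersion: v1
kind: LimitRange
metadata:
  name: project-resource-limits
spec:
  limits:
    - type: Container
      default:          # default limits applied when a container sets none
        cpu: "2"
        memory: 1Gi
      defaultRequest:   # default requests applied when a container sets none
        cpu: 250m
        memory: 1Gi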
I figured out an answer: it does not read the values from the file, but at least they can be dynamic.
OpenShift has an oc process command that can be run when using a template.
So this works by doing:
oc process -f <template_name>.yaml -v <param_name>=<param_value>
This will overwrite the parameter value with the one being passed via -v.
An actual example would be
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2
You can read more about it in the OpenShift template documentation.
It seems that the OpenShift Origin team does not want to support using files for parameter insertion. You can read more about it here:
https://github.com/openshift/origin/pull/10952
https://github.com/openshift/origin/issues/10687
