I need to replace a cron entry in a file using sed or awk.
I tried this, but it didn't work:
sed -i 's/0 0 * * 0/0 1 * * 1/g' script.sh
script.sh
#!/bin/bash
mkdir -p .github/workflows
cd .github/workflows
touch semgrep.yml
cat << EOF > semgrep.yml
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
    - master
    - main
    paths:
    - .github/workflows/semgrep.yml
  schedule:
  - cron: '0 0 * * 0'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
EOF
How can I make this replacement work?
Using mikefarah/yq to edit the file in place (-i):
yq -i '.on.schedule[].cron = "0 1 * * 1"' semgrep.yml
would turn a semgrep.yml containing
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
    - master
    - main
    paths:
    - .github/workflows/semgrep.yml
  schedule:
  - cron: '0 0 * * 0'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
into one containing
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
    - master
    - main
    paths:
    - .github/workflows/semgrep.yml
  schedule:
  - cron: '0 1 * * 1'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
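If you would rather keep the original sed approach, note that it fails because an unescaped * in the pattern is a regex quantifier (it applies to the preceding space), so the expression never matches the literal cron string and sed silently changes nothing. A minimal sketch that escapes the asterisks, run against the generator script from the question (it works the same against the generated semgrep.yml):
# escape the asterisks in the pattern so sed matches them literally;
# in the replacement text an asterisk has no special meaning
sed -i 's/0 0 \* \* 0/0 1 \* \* 1/g' script.sh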
I have this pipeline file:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
    - main
    - issues*
    - tasks*
  paths:
    exclude:
    - documentation/*
    - Readme.md

variables:
- name: majorVersion
  value: 1
- name: minorVersion
  value: 0
- name: revision
  value: $[counter(variables['minorVersion'],0)]
- name: buildVersion
  value: $(majorVersion).$(minorVersion).$(revision)

name: $(buildVersion)
I expect the pipeline run name to be 1.0.0, but instead it is the literal string $(majorVersion).$(minorVersion).$(revision). Where did I get the formatting wrong?
I'm working on a Concourse pipeline and I need to duplicate a lot of code in my YAML, so I'm trying to refactor it so it is easily maintainable and I don't end up with thousands of duplicated lines/blocks.
I have arrived at the following YAML after what seems to be the recommended approach, but it doesn't fulfill all my needs.
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
    - name: source-pipeline
    - name: source-code-x

jobs:
- name: test-a
  plan:
  - in_parallel:
    - get: source-pipeline
    - get: source-code-a
      trigger: true
  - <<: *add-rotm-points
- name: test-b
  plan:
  - in_parallel:
    - get: source-pipeline
    - get: source-code-b
      trigger: true
  - <<: *add-rotm-points
My problem is that both my jobs use the generic task defined at the top, but inside that generic task I need to change source-code-x to the -a or -b version used by each job.
I cannot find a way to achieve this without duplicating my anchor in every job, which seems counterproductive. But I may not have fully understood YAML anchors/merges.
All you need to do is map inputs on individual tasks, like this:
add-rotm-points: &add-rotm-points
  task: add-rotm-points
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ((registre))/polygone/concourse/cf-cli-python3
        tag: 0.0.1
        insecure_registries: [ ((registre)) ]
    run:
      path: source-pipeline/commun/rotm/trigger-rotm.sh
      args: [ "source-pipeline", "source-code-x" ]
    inputs:
    - name: source-pipeline
    - name: source-code-x

jobs:
- name: test-a
  plan:
  - in_parallel:
    - get: source-pipeline
    - get: source-code-a
      trigger: true
  - <<: *add-rotm-points
    input_mapping:
      source-code-x: source-code-a
- name: test-b
  plan:
  - in_parallel:
    - get: source-pipeline
    - get: source-code-b
      trigger: true
  - <<: *add-rotm-points
    input_mapping:
      source-code-x: source-code-b
See Example Three in this blog: https://blog.concourse-ci.org/introduction-to-task-inputs-and-outputs/
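To confirm that the anchor, merge key, and input_mapping expand the way you expect before deploying, you can validate the file locally with the fly CLI (a quick sketch, assuming the pipeline is saved as pipeline.yml):
# the YAML parser resolves anchors and merge keys during validation, so
# this catches indentation and merge mistakes without touching the server
fly validate-pipeline -c pipeline.yml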
I am trying to set variables from predefined resource variables through a Bash script, but could not succeed. Below is my task in the Azure pipeline:
resources:
  pipelines:
  - pipeline: pipeline1
    project: appcom
    source: pipeline-api
    trigger:
      branches:
      - develop
      - feat/*
  - pipeline: pipeline2
    project: appcom
    source: pipeline2-api
    trigger:
      branches:
      - develop
      - feat/*

variables:
- name: alias
  value: $(resources.triggeringAlias)

stages:
- stage: ScanImage
  jobs:
  - job: ScanImage
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: echo $(alias)
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          if [ "$(alias)" == "pipeline1" ]; then
            echo ("##vso[task.setvariable variable=apiname]$(resources.pipeline.pipeline1.pipelineName)")
            echo ("##vso[task.setvariable variable=dockertag]$(resources.pipeline.pipeline1.sourceCommit) | cut -c -7")
            echo ("##vso[task.setvariable variable=helmpath]P02565Mallorca/pipeline1-api")
          elif [ "$(alias)" == "pipeline2" ]; then
            echo ("##vso[task.setvariable variable=apiname]$(resources.pipeline.pipeline2.pipelineName)")
            echo ("##vso[task.setvariable variable=dockertag]$(resources.pipeline.pipeline2.sourceCommit) | cut -c -7")
            echo ("##vso[task.setvariable variable=helmpath]P02565Mallorca/pipeline2")
          fi
    - script: echo $(dockertag)
    - script: echo $(helmpath)
    - script: echo $(apiname)
It gives me the error ##[error]Bash exited with code '2'.
Referring to this doc, Set variables in scripts, the YAML below should work as expected. Two fixes: the parentheses around the echo arguments are invalid Bash syntax, which is what makes Bash exit with code 2, and the | cut -c -7 inside the logging string would be stored literally as part of the value, so the short commit is computed first.
resources:
  pipelines:
  - pipeline: pipeline1
    project: appcom
    source: pipeline-api
    trigger:
      branches:
      - develop
      - feat/*
  - pipeline: pipeline2
    project: appcom
    source: pipeline2-api
    trigger:
      branches:
      - develop
      - feat/*

variables:
- name: alias
  value: $(resources.triggeringAlias)

stages:
- stage: ScanImage
  jobs:
  - job: ScanImage
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: echo $(alias)
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          if [ "$(alias)" = "pipeline1" ]; then
            # shorten the commit first; a literal "| cut -c -7" inside the
            # logging string would be stored as part of the variable value
            shortsha=$(echo "$(resources.pipeline.pipeline1.sourceCommit)" | cut -c -7)
            echo "##vso[task.setvariable variable=apiname]$(resources.pipeline.pipeline1.pipelineName)"
            echo "##vso[task.setvariable variable=dockertag]$shortsha"
            echo "##vso[task.setvariable variable=helmpath]P02565Mallorca/pipeline1-api"
          elif [ "$(alias)" = "pipeline2" ]; then
            shortsha=$(echo "$(resources.pipeline.pipeline2.sourceCommit)" | cut -c -7)
            echo "##vso[task.setvariable variable=apiname]$(resources.pipeline.pipeline2.pipelineName)"
            echo "##vso[task.setvariable variable=dockertag]$shortsha"
            echo "##vso[task.setvariable variable=helmpath]P02565Mallorca/pipeline2-api"
          fi
    - script: echo $(dockertag)
    - script: echo $(helmpath)
    - script: echo $(apiname)
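You can reproduce the original failure locally, independent of Azure (a minimal sketch):
# parentheses are not valid around echo's arguments; bash reports a
# syntax error and exits with status 2, which the Bash@3 task surfaces
bash -c 'echo ("hello")'
echo "exit code: $?"   # prints: exit code: 2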
See Resources: pipelines for more details.
I am using a GitHub Action to run Cypress e2e tests, but when the tests fail the job still passes.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The reason I would like the job to fail is to get notified, either via the GitHub job failing or with a Slack notification like this:
- uses: 8398a7/action-slack@v3
  if: job.status == 'failure'
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Have you tried adding an if: ${{ failure() }} at the end?
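For example, the notification step from the question with that condition applied (a sketch; the action and secrets are unchanged):
- uses: 8398a7/action-slack@v3
  if: ${{ failure() }}   # runs only when an earlier step in the job failed
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}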
I ended up creating my own curl request which notifies Slack when anything fails on a specific job. Adding it to the end worked for me.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
+
+     - name: Notify Test Failed
+       if: ${{ failure() }}
+       run: |
+         curl -X POST -H "Content-type:application/json" --data "{\"type\":\"mrkdwn\",\"text\":\".\n❌ *Cypress Run Failed*:\n*Branch:* $GITHUB_REF\n*Repo:* $GITHUB_REPOSITORY\n*SHA1:* $GITHUB_SHA\n*User:* $GITHUB_ACTOR\n.\n\"}" ${{ secrets.SLACK_WEBHOOK }}
I am trying to configure a YAML file in this format:
jobs:
  - name: A
  - schedule: "0 0/5 * 1/1 * ? *"
  - type: mongodb.cluster
  - config:
    - host: mongodb://localhost:27017/admin?replicaSet=rs
    - minSecondaries: 2
    - minOplogHours: 100
    - maxSecondaryDelay: 120
  - name: B
  - schedule: "0 0/5 * 1/1 * ? *"
  - type: mongodb.cluster
  - config:
    - host: mongodb://localhost:27017/admin?replicaSet=rs
    - minSecondaries: 2
    - minOplogHours: 100
    - maxSecondaryDelay: 120
The idea is that I can read the contents inside the jobs element and have a series of different job configs which can be parsed.
However, yamllint.com tells me that this is illegal YAML due to mapping values are not allowed in this context at line 2, where line 2 is the jobs: line.
What am I doing wrong?
This is valid YAML:
jobs:
- name: A
  schedule: "0 0/5 * 1/1 * ? *"
  type: mongodb.cluster
  config:
    host: mongodb://localhost:27017/admin?replicaSet=rs
    minSecondaries: 2
    minOplogHours: 100
    maxSecondaryDelay: 120
- name: B
  schedule: "0 0/5 * 1/1 * ? *"
  type: mongodb.cluster
  config:
    host: mongodb://localhost:27017/admin?replicaSet=rs
    minSecondaries: 2
    minOplogHours: 100
    maxSecondaryDelay: 120
Note that every '-' starts a new element in the sequence. Also, the indentation of keys within the same map must be exactly the same.
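If you want to see how a parser actually reads this structure, a quick sketch (assuming Python with PyYAML installed and the document saved as jobs.yml, both hypothetical names):
# prints a list of two dicts, one per job, confirming that each
# "- name: ..." line starts a new element of the jobs sequence
python3 -c 'import yaml; print(yaml.safe_load(open("jobs.yml"))["jobs"])'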
The elements of a sequence need to be indented at the same level. Assuming you want two jobs (A and B) each with an ordered list of key value pairs, you should use:
jobs:
- - name: A
  - schedule: "0 0/5 * 1/1 * ? *"
  - - type: mongodb.cluster
    - config:
      - host: mongodb://localhost:27017/admin?replicaSet=rs
      - minSecondaries: 2
      - minOplogHours: 100
      - maxSecondaryDelay: 120
- - name: B
  - schedule: "0 0/5 * 1/1 * ? *"
  - - type: mongodb.cluster
    - config:
      - host: mongodb://localhost:27017/admin?replicaSet=rs
      - minSecondaries: 2
      - minOplogHours: 100
      - maxSecondaryDelay: 120
Converting the sequences of (single entry) mappings to a mapping, as @Tsyvarrev does, is also possible, but makes you lose the ordering.