I cannot seem to get my config.yml file to work on CircleCI - continuous-integration

I'm new to CircleCI and I'm trying to schedule a run of a Cypress test once every 24 hours. I have this config.yml file in my GitHub repo which should work (I think), but I'm getting "config.yml is not valid" and I'm not sure where I should start debugging:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
'on':
  schedule:
    - cron: 0 1-23 * * *
    - cron: 0 0 * * *
workflows:
  version: 2
  release:
    jobs: null
  test_schedule:
    name: Test schedule
    runs-on: ubuntu-latest
    steps:
      - name: Skip this step every 24 hours
        if: github.event_name == 'schedule' && github.event.schedule != '0 0 * * *'
        run: echo "This step will be skipped every 24 hours"
      - test: null
    triggers:
      - schedule:
          cron: 0 0 * * *
Any pointers where I'm going wrong? Thanks.
The error message on CircleCI shows:
Value does not match schema: {:workflows {:release {:jobs (not (map? nil))}}}
But from what I can see it's in the correct order?
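For reference, CircleCI has no GitHub-Actions-style `on:`, `runs-on:`, or `steps:` keys at the workflow level; scheduling is done with a `triggers` block on the workflow itself, and work happens in jobs supplied by the orb. A minimal sketch of a nightly Cypress run (the workflow name and the `main` branch filter are assumptions):

```yaml
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"    # once every 24 hours, at midnight UTC
          filters:
            branches:
              only:
                - main         # scheduled triggers require a branch filter
    jobs:
      - cypress/run            # job provided by the cypress-io/cypress orb
```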

Related

How do I pipe in file content as args to kubectl?

I wish to run k6 in a container with a simple JavaScript load test from the local file system, but the commands below seem to have a syntax error:
$ cat simple.js
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  http.get('http://100.96.1.79:8080');
  sleep(1);
}
$ kubectl run k6 --image=grafana/k6 -- run - <simple.js
# OR
$ kubectl run k6 --image=grafana/k6 run - <simple.js
in the k6 pod log, I got
time="2023-02-16T12:12:05Z" level=error msg="could not initialize '-': could not load JS test 'file:///-': no exported functions in s
I guess this means the simple.js is not really passed to k6 this way?
thank you!
I think you can't pipe (host) files into Kubernetes containers this way.
One way that should work is to:
1. Create a ConfigMap to represent your file
2. Apply a Pod config that mounts the ConfigMap file
NAMESPACE="..." # Or default

kubectl create configmap simple \
  --from-file=${PWD}/simple.js \
  --namespace=${NAMESPACE}

kubectl get configmap/simple \
  --output=yaml \
  --namespace=${NAMESPACE}
Yields:
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple
data:
  simple.js: |
    import http from 'k6/http';
    import { sleep } from 'k6';

    export default function () {
      http.get('http://test.k6.io');
      sleep(1);
    }
NOTE You could just create e.g. configmap.yaml with the above YAML content and apply it.
Then with pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: simple
spec:
  containers:
    - name: simple
      image: docker.io/grafana/k6
      args:
        - run
        - /m/simple.js
      volumeMounts:
        - name: simple
          mountPath: /m
  volumes:
    - name: simple
      configMap:
        name: simple
Apply it:
kubectl apply \
  --filename=${PWD}/pod.yaml \
  --namespace=${NAMESPACE}
Then, finally:
kubectl logs pod/simple \
  --namespace=${NAMESPACE}
Yields:
(k6 ASCII-art banner)
execution: local
script: /m/simple.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
running (00m01.0s), 1/1 VUs, 0 complete and 0 interrupted iterations
default [ 0% ] 1 VUs 00m01.0s/10m0s 0/1 iters, 1 per VU
running (00m01.4s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [ 100% ] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU
data_received..................: 17 kB 12 kB/s
data_sent......................: 542 B 378 B/s
http_req_blocked...............: avg=128.38ms min=81.34ms med=128.38ms max=175.42ms p(90)=166.01ms p(95)=170.72ms
http_req_connecting............: avg=83.12ms min=79.98ms med=83.12ms max=86.27ms p(90)=85.64ms p(95)=85.95ms
http_req_duration..............: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
{ expected_response:true }...: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
http_req_failed................: 0.00% ✓ 0 ✗ 2
http_req_receiving.............: avg=102.59µs min=67.99µs med=102.59µs max=137.19µs p(90)=130.27µs p(95)=133.73µs
http_req_sending...............: avg=67.76µs min=40.46µs med=67.76µs max=95.05µs p(90)=89.6µs p(95)=92.32µs
http_req_tls_handshaking.......: avg=44.54ms min=0s med=44.54ms max=89.08ms p(90)=80.17ms p(95)=84.62ms
http_req_waiting...............: avg=88.44ms min=81.05ms med=88.44ms max=95.83ms p(90)=94.35ms p(95)=95.09ms
http_reqs......................: 2 1.394078/s
iteration_duration.............: avg=1.43s min=1.43s med=1.43s max=1.43s p(90)=1.43s p(95)=1.43s
iterations.....................: 1 0.697039/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1
Tidy:
kubectl delete \
  --filename=${PWD}/pod.yaml \
  --namespace=${NAMESPACE}

kubectl delete configmap/simple \
  --namespace=${NAMESPACE}

kubectl delete namespace/${NAMESPACE}

GitHub Actions - Set two output names from custom action in Golang code

I have written a custom action in Golang, which has a function that sets two outputs.
The outputs are created for later use in the workflow, so I can call the action in one job ("Calculate version"), capture the outputs, and use them in a different job ("Build").
The Go function inside the action that creates the outputs is:
func SetTag(value string) {
	fmt.Printf(`::set-output name=repo_tag::%s`, value)
	fmt.Printf(`::set-output name=ecr_tag::%s`, "v"+value)
}
And the workflow that uses the said output is like so:
Version:
  name: Calculate Version
  runs-on: ubuntu-latest
  outputs:
    repo_version_env: ${{ steps.version.outputs.repo_tag }}
    ecr_version_env: ${{ steps.version.outputs.ecr_tag }}
  steps:
    - name: Check out code
      uses: actions/checkout@v2
    - name: Calculate version
      id: version
      uses: my_custom_action/version@v1
Build:
  name: Build Image
  runs-on: ubuntu-latest
  needs: [Version]
  steps:
    - name: Build
      run: |
        echo ${{ needs.Version.outputs.ecr_version_env }}
        echo ${{ needs.Version.outputs.repo_version_env }}
When I run the workflow, the echo commands give me the following in the action build output:
1 echo
2 echo v1.4.1::set-output name=ecr_tag::1.4.1
Instead of:
1 echo v1.4.1
2 echo 1.4.1
What am I doing wrong?
If there are better ways to use the variables output from the action between different jobs in the workflow, I'd be happy to hear them as well.
Thanks a lot.
From what I tested here, the problem is that the Go function prints the data like this:
::set-output name=repo_tag::1.4.1::set-output name=ecr_tag::v1.4.1%
The problem seems to be that the two output commands are not separated by a newline.
It should work by updating the Go function to:
func SetTag(value string) {
	fmt.Printf(`::set-output name=repo_tag::%s`, value)
	fmt.Print("\n")
	fmt.Printf(`::set-output name=ecr_tag::%s`, "v"+value)
	fmt.Print("\n")
}
Note that the second fmt.Print("\n") will remove the % symbol on the terminal.
If you want to check the test I made:
workflow file
golang script
workflow run
The output is the one you expect.

Cron jobs to run an ansible playbook every hour

I have been trying to configure a cron job to run my Ansible playbook every hour. I could not find any relevant examples of how to start the configuration. I have tried the task below in a separate Ansible playbook, and it shows the output below for crontab -l, but no execution seems to be happening. Help will be highly appreciated.
root@ubuntu:/etc/ansible/# crontab -l
57 10 * * * ansible-playbook crontest.yaml
- name: Run an ansible playbook
  cron:
    name: "Run play"
    minute: "57"
    hour: "10"
    job: "ansible-playbook crontest.yaml"
Cron Logs:
Nov 24 07:35:01 ABC30VDEF290021 CRON[13951]: (root) CMD (echo "testing" > /etc/ansible/automation/logs/test.txt)
Nov 24 07:35:01 ABC30VDEF290021 CRON[13948]: (root) CMD ( /usr/bin/ansible-playbook /etc/ansible/automation/crontest.yml)
Nov 24 07:35:01 ABC30VDEF290021 CRON[13949]: (root) CMD (/usr/local/bin/ansible-playbook /etc/ansible/automation/main.yaml)
Crontab -e
#Ansible: Run play
*/1 * * * * /bin/sh -c '. ~/.profile; /usr/local/bin/ansible-playbook /etc/ansible/automation/crontest.yml
*/1 * * * * echo "testing" > /etc/ansible/automation/logs/test.txt
35 7 * * * /usr/local/bin/ansible-playbook /etc/ansible/automation/main.yaml
The following code will run the script at the 30th minute of every hour:
- name: Run CRON job to load data at every 30th minute of every hour.
  become: yes
  become_method: sudo
  cron:
    name: "load_data"
    user: "root"
    weekday: "*"
    minute: "30"
    hour: "*"
    job: "python3 /path/to/my/script/loadScript.py > /home/ec2-user/loadData_result 2>&1"
    state: present
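For the hourly schedule the question asks about, a similar task might look like this. The absolute playbook and log paths are assumptions; the key point is that cron runs with a minimal PATH and no login environment, which is a common reason a bare ansible-playbook command works in an interactive shell but silently fails from cron:

```yaml
- name: Run the playbook at the top of every hour
  become: yes
  cron:
    name: "Run play hourly"
    minute: "0"
    # Use absolute paths: cron's PATH rarely includes /usr/local/bin,
    # and the job's working directory is not /etc/ansible
    job: "/usr/local/bin/ansible-playbook /etc/ansible/crontest.yaml >> /var/log/crontest.log 2>&1"
    state: present
```

Redirecting stdout and stderr to a log file also makes the next silent failure much easier to diagnose.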

Update property of specific item in array using yq 4

I'm using yq 4.3.1 to update the version field in this yaml:
jobs:
  my-job:
    steps:
      - name: Step 1
        id: step1
        uses: actions/step1
      - name: Step 2
        id: step2
        uses: actions/step2
        with:
          version: 1.2.3
But I can't figure out how to select the array item based on its id == 'step2' property so that I can update the version.
Why is it you always figure out the answer the second after you post a question on stackoverflow?
yq eval '(.jobs.my-job.steps[] | select(has("id")) | select(.id == "step2")).with.version = "1.2.4"' -i my.yaml
EDIT
Wow, how wrong was I... :D Updated with a working version
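Applied to the sample file above, that expression selects only the array item whose id is step2 and should leave it as:

```yaml
- name: Step 2
  id: step2
  uses: actions/step2
  with:
    version: 1.2.4
```

Step 1 is untouched because it fails the select filter, and the -i flag writes the change back to my.yaml in place.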

CloudBuild jobs not executed in parallel

I am learning Cloud Build and understand that I can use waitFor to influence the order in which my build runs. job1 includes some sleep time to simulate a long-running job; job2 just echoes something; done waits for job1 and job2. So I created a test build like this. I have a package.json:
{
  "scripts": {
    "job1": "echo \"[job1] Starting\" && sleep 5 && echo \"[job1] ...\" && sleep 2 && echo \"[job1] Done\" && exit 0",
    "job2": "echo \"[job2] Hello from NPM\" && exit 0",
    "done": "echo \"DONE DONE DONE!\" && exit 0"
  }
}
Job 1 simulates a long-running job, and I was hoping job 2 would execute in parallel. But the output shows it does not. Does Cloud Build run only one step at a time?
cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'job1']
    id: 'job1'
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'job2']
    id: 'job2'
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'done']
    waitFor: ['job1', 'job2']
Output
Operation completed over 1 objects/634.0 B.
BUILD
Starting Step #0 - "job1"
Step #0 - "job1": Already have image (with digest): gcr.io/cloud-builders/npm
Step #0 - "job1":
Step #0 - "job1": > learn-gcp@1.0.0 job1 /workspace
Step #0 - "job1": > echo "[job1] Starting" && sleep 5 && echo "[job1] ..." && sleep 2 && echo "[job1] Done" && exit 0
Step #0 - "job1":
Step #0 - "job1": [job1] Starting
Step #0 - "job1": [job1] ...
Step #0 - "job1": [job1] Done
Finished Step #0 - "job1"
Starting Step #1 - "job2"
Step #1 - "job2": Already have image (with digest): gcr.io/cloud-builders/npm
Step #1 - "job2":
Step #1 - "job2": > learn-gcp@1.0.0 job2 /workspace
Step #1 - "job2": > echo "[job2] Hello from NPM" && exit 0
Step #1 - "job2":
Step #1 - "job2": [job2] Hello from NPM
Finished Step #1 - "job2"
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/npm
Step #2:
Step #2: > learn-gcp@1.0.0 done /workspace
Step #2: > echo "DONE DONE DONE!" && exit 0
Step #2:
Step #2: DONE DONE DONE!
Finished Step #2
PUSH
DONE
If you wanted job2 to be executed concurrently with job1, you should have added the line
waitFor: ['-'] in your cloudbuild.yaml, immediately after job2's id. As stated in the official documentation:
If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running.
To run a build step immediately at build time, use - in the waitFor
field.
The order of the build steps in the steps field relates to the order
in which the steps are executed. Steps will run serially or
concurrently based on the dependencies defined in their waitFor
fields.
The documentation also includes an example of how to run two jobs in parallel.
In case you want job2 to run along with job1, you should have something like this:
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'job1']
    id: 'job1'
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'job2']
    id: 'job2'
    waitFor: ['-'] # The '-' indicates that this step begins immediately.
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'done']
    waitFor: ['job1', 'job2']
Also note that the maximum number of concurrent builds you can use is ten.
