YAML mapping values are not allowed in this context - yaml

I am trying to configure a YAML file in this format:
jobs:
 - name: A
 - schedule: "0 0/5 * 1/1 * ? *"
 - type: mongodb.cluster
 - config:
   - host: mongodb://localhost:27017/admin?replicaSet=rs
   - minSecondaries: 2
   - minOplogHours: 100
   - maxSecondaryDelay: 120
 - name: B
 - schedule: "0 0/5 * 1/1 * ? *"
 - type: mongodb.cluster
 - config:
   - host: mongodb://localhost:27017/admin?replicaSet=rs
   - minSecondaries: 2
   - minOplogHours: 100
   - maxSecondaryDelay: 120
The idea is that I can read the contents inside the job element, and have a series of different job configs which can be parsed.
However, yamllint.com tells me this is invalid YAML, reporting "mapping values are not allowed in this context" at line 2, where line 2 is the jobs: line.
What am I doing wrong?

This is valid YAML:
jobs:
 - name: A
   schedule: "0 0/5 * 1/1 * ? *"
   type: mongodb.cluster
   config:
     host: mongodb://localhost:27017/admin?replicaSet=rs
     minSecondaries: 2
     minOplogHours: 100
     maxSecondaryDelay: 120
 - name: B
   schedule: "0 0/5 * 1/1 * ? *"
   type: mongodb.cluster
   config:
     host: mongodb://localhost:27017/admin?replicaSet=rs
     minSecondaries: 2
     minOplogHours: 100
     maxSecondaryDelay: 120
Note that every '-' starts a new element in the sequence. Also, the indentation of keys within a map must be exactly the same.
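With that structure, each entry under jobs loads as an ordinary mapping, so the per-job config can be read directly. A quick sketch, assuming the corrected YAML is saved as jobs.yml and mikefarah/yq v4 is available (both of these are assumptions, not part of the question):
# print each job's name together with its MongoDB host
yq '.jobs[] | .name + " -> " + .config.host' jobs.yml
# A -> mongodb://localhost:27017/admin?replicaSet=rs
# B -> mongodb://localhost:27017/admin?replicaSet=rs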

The elements of a sequence need to be indented at the same level. Assuming you want two jobs (A and B), each with an ordered list of key-value pairs, you should use:
jobs:
- - name: A
  - schedule: "0 0/5 * 1/1 * ? *"
  - type: mongodb.cluster
  - config:
      - host: mongodb://localhost:27017/admin?replicaSet=rs
      - minSecondaries: 2
      - minOplogHours: 100
      - maxSecondaryDelay: 120
- - name: B
  - schedule: "0 0/5 * 1/1 * ? *"
  - type: mongodb.cluster
  - config:
      - host: mongodb://localhost:27017/admin?replicaSet=rs
      - minSecondaries: 2
      - minOplogHours: 100
      - maxSecondaryDelay: 120
Converting the sequences of (single-entry) mappings to a mapping, as @Tsyvarev does, is also possible, but it makes you lose the ordering.

Related

YACE exporter for CloudWatch Metrics does not work

I want to export CloudWatch Metrics for AWS Lambda functions, and hence configured the YACE exporter following this link.
I configured this as a CronJob as shown below, but I end up with an error about a missing region even though I have specified the region.
Here's the cronjob:
apiVersion: v1
kind: Namespace
metadata:
  name: tools
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloudwatch-metrics-exporter
  labels:
    app: cloudwatch-exporter
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: cloudwatch-exporter
        spec:
          volumes:
            - configMap:
                defaultMode: 420
                name: yace-lambda-config
              name: yace-lambda-config
            - secret:
                defaultMode: 420
                secretName: cloudwatch-metrics-exporter-secrets
              name: cloudwatch-credentials
          containers:
            - name: yace
              image: quay.io/invisionag/yet-another-cloudwatch-exporter:v0.16.0-alpha
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 5000
              volumeMounts:
                - name: yace-lambda-config
                  mountPath: /tmp/config.yml
                  subPath: config.yml
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              securityContext:
                capabilities:
                  drop:
                    - ALL
                privileged: false
                runAsUser: 1000
                runAsNonRoot: true
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: false
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          automountServiceAccountToken: true
          shareProcessNamespace: false
          securityContext:
            runAsNonRoot: false
            seccompProfile:
              type: RuntimeDefault
          restartPolicy: OnFailure
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: yace-lambda-config
  namespace: tools
data:
  config.yml: |
    discovery:
      jobs:
        - type: lambda
          regions:
            - eu-central-1
          enableMetricData: true
          metrics:
            - name: Duration
              statistics: [ Sum, Maximum, Minimum, Average ]
              period: 300
              length: 3600
            - name: Invocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: Errors
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: Throttles
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: DeadLetterErrors
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: DestinationDeliveryFailures
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencyInvocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencySpilloverInvocations
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: IteratorAge
              statistics: [ Average, Maximum ]
              period: 300
              length: 3600
            - name: ConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
            - name: ProvisionedConcurrencyUtilization
              statistics:
                - Maximum
              period: 300
              length: 3600
            - name: UnreservedConcurrentExecutions
              statistics: [ Sum ]
              period: 300
              length: 3600
Error:
2022/12/16 08:54:28 ERROR: unable to resolve endpoint for service "tagging", region "", err: UnknownEndpointError: could not resolve endpoint
partition: "aws", service: "tagging", region: "", known: [us-east-2 us-west-1 ap-northeast-1 ap-southeast-1 eu-west-1 eu-west-2 ap-southeast-2 eu-west-3 us-west-2 sa-east-1 us-east-1 ap-east-1 ap-northeast-2 ca-central-1 me-south-1 ap-south-1 eu-central-1 eu-north-1]
What am I doing wrong?
Update: It turns out it was an indentation problem with my YAML. I resolved that and now have a different issue.
Here's the error:
{"level":"warning","msg":"NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors","time":"2022-12-16T16:35:47Z"}
{"level":"info","msg":"Couldn't describe resources for region eu-central-1: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\n","time":"2022-12-16T16:35:47Z"}
I have provided AWS credentials via environment variables by exporting the necessary AWS credentials and then running kubectl apply -f config.yaml.
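As a side note, environment variables exported in the local shell before running kubectl apply never reach the pod; the exporter only sees credentials that exist inside the container (the manifest above defines a cloudwatch-credentials volume but never mounts it, and sets no AWS env vars). A minimal sketch of one option, assuming the keys are meant to live in the already-referenced Secret (the key names and namespace are assumptions, not from the question):
# create/update the Secret from the locally exported values
kubectl -n tools create secret generic cloudwatch-metrics-exporter-secrets \
  --from-literal=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --from-literal=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  --dry-run=client -o yaml | kubectl apply -f -
# the container spec then still needs to expose those keys, e.g. via envFrom with a secretRef
# or individual secretKeyRef entries, so the AWS SDK inside the pod can find them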

How to replace a cron in a shell script using sed?

I need to replace a cron entry in a file using sed or awk.
I tried this, but it didn't work:
sed -i 's/0 0 * * 0/0 1 * * 1/g' script.sh
script.sh
#!/bin/bash
mkdir -p .github/workflows
cd .github/workflows
touch semgrep.yml
cat << EOF > semgrep.yml
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
      - master
      - main
    paths:
      - .github/workflows/semgrep.yml
  schedule:
    - cron: '0 0 * * 0'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
EOF
Kindly help me with the same.
Using mikefarah/yq to edit the file in place (-i):
yq -i '.on.schedule[].cron = "0 1 * * 1"' semgrep.yml
would turn a semgrep.yml containing
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
      - master
      - main
    paths:
      - .github/workflows/semgrep.yml
  schedule:
    - cron: '0 0 * * 0'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
into one containing
name: Semgrep
on:
  pull_request: {}
  push:
    branches:
      - master
      - main
    paths:
      - .github/workflows/semgrep.yml
  schedule:
    - cron: '0 1 * * 1'
jobs:
  semgrep:
    name: Static Analysis Scan
    runs-on: ubuntu-18-04
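As a side note on why the original sed attempt did not work: in the pattern 0 0 * * 0, each unescaped * is a regex quantifier applied to the preceding space, so the expression matches "0 0" followed only by spaces and never matches the literal cron string. Escaping the asterisks would make the plain-text substitution work, though this is just a sketch and the YAML-aware yq approach above is more robust:
# escape * in the search pattern so sed treats it literally (the replacement side is literal anyway)
sed -i 's/0 0 \* \* 0/0 1 * * 1/g' script.sh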

bash script to edit yaml using awk [duplicate]

This question already has an answer here:
Need help on bash awk to update Yaml file by finding the pattern in the file
(1 answer)
Closed 1 year ago.
I am new to bash and looking for some advice on the issue described below.
I have the config file below.
impulse.yaml
- job_name: orch
  value: CPST
- group: indalco
  value1: wr
- monitor:
    - name: quid
      cnt: 2
    - name: kwid
      cnt: 3
    - name: knid
      cnt: 4
- interval: 3m
- static_configs:
    - targets: targets1
      labels:
        group: BSA
        gid: geo
        dc: lba
If I run the bash script, it should update the file like below.
Updated impulse.yaml:
- job_name: orch
  value: CPST
- group: indalco
  value1: wr
- monitor:
    - name: quid
      cnt: 2
    - name: kwid
      cnt: 3
    - name: knid
      cnt: 4
    - name: orch_vm1
    - name: orch_vm2
- interval: 3m
- static_configs:
    - targets: targets1
      labels:
        group: BSA
        gid: geo
        dc: lba
------------------bash script---------
getline() {
    awk '
        BEGIN { ln=1; find_monitor=0; }
        (find_monitor==1 && $0~/^[a-z]/) { exit }
        ($0~/^monitor:/) { find_monitor = 1; ln = NR }
        END { print ln }' ${1}
}
word="monitor" # no use of this variable
echo $line
filename="impulse.yaml"
for vm_name in orch_vm1 orch_vm2;
do
    line=`getline $filename $word`
    sed -i -e ${line}"a\ - name: \"${vm_name}\" " $filename
done
The code right now inserts at the beginning of the monitor section of the YAML file, as shown below, but it needs to insert at the end of the monitor section, before the interval section. Please advise what pattern-matching technique can be applied.
- job_name: orch
  value: CPST
- group: indalco
  value1: wr
- monitor:
    - name: orch_vm1
    - name: orch_vm2
    - name: quid
      cnt: 2
    - name: kwid
      cnt: 3
    - name: knid
      cnt: 4
- interval: 3m
- static_configs:
    - targets: targets1
      labels:
        group: BSA
        gid: geo
        dc: lba
I agree with @LéaGris' comment: structured data like YAML should be interpreted via its defined syntax, and traditional line-oriented command-line tools can't do that. yq is the closest analogue.
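As a sketch of that idea with mikefarah/yq v4 (assuming the structure shown above, i.e. a top-level sequence in which one item holds the monitor list; both the tool version and the structure are assumptions), the new names can be appended at the end of the monitor section rather than inserted at its beginning:
# append each VM entry to the end of the "monitor" sequence, in place
for vm_name in orch_vm1 orch_vm2; do
  VM="$vm_name" yq -i '(.[] | select(has("monitor")) | .monitor) += [{"name": strenv(VM)}]' impulse.yaml
done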

Prometheus: How to disable 1 rule for 1 specific job_name?

I'm setting up Prometheus alerts (using elasticsearch_exporter) for 2 Elasticsearch clusters, one with 8 nodes and one with 3 nodes.
What I want is to send an alert when either cluster loses 1 node, but for now all rules apply to both clusters, so that's not possible.
prometheus.yml file
global:
  scrape_interval: 10s
rule_files:
  - alert.rules.yml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093
scrape_configs:
  - job_name: cluster1
    scrape_interval: 30s
    scrape_timeout: 30s
    metrics_path: "/metrics"
    static_configs:
      - targets: ['xxx1:9114']
        labels:
          service: cluster1
  - job_name: cluster2
    scrape_interval: 30s
    scrape_timeout: 30s
    metrics_path: "/metrics"
    static_configs:
      - targets: ['xxx2:9114']
        labels:
          service: cluster2
alert.rules.yml file:
groups:
  - name: alert.rules
    rules:
      - alert: ElasticsearchLostNode
        expr: elasticsearch_cluster_health_number_of_nodes < 8
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: Elasticsearch Healthy Nodes (instance {{ $labels.instance }})
          description: Number Healthy Nodes less than 8
...
Of course, number_of_nodes < 8 will always be true for the small cluster, and if I set < 3, the alert will not be triggered when the big cluster loses 1 node.
Is there a way to exempt one specific rule for one specific job_name, or to define rules A that apply only to job_name A and rules B that apply only to job_name B?
Yes, you can create one rule for each job in the alert.rules.yml file:
groups:
  - name: alert.rules
    rules:
      - alert: ElasticsearchLostNode1
        expr: elasticsearch_cluster_health_number_of_nodes{job="cluster1"} < 8
        ...
      - alert: ElasticsearchLostNode2
        expr: elasticsearch_cluster_health_number_of_nodes{job="cluster2"} < 3
        ...
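If helpful, the edited rule file can be sanity-checked before reloading Prometheus; promtool (shipped with Prometheus) validates rule syntax, though this is just a verification step and assumes promtool is on the PATH:
# validate the alerting rules file before reloading Prometheus
promtool check rules alert.rules.yml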

How can I act on the last created node in my jelastic installation manifest?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
onInstall:
  - addNodes:
      - nodeType: docker
        count: 1
        fixedCloudlets: 1
        cloudlets: 16
        dockerName: gitlab/gitlab-runner
onAfterAddNode:
  - installDocker
actions:
  installDocker:
    - cmd:
        - myDockerInstallScript.sh
My problem is that the onAfterAddNode actions are not called, even though the node was added successfully. What am I doing wrong? How can I guarantee the commands will be run on the added node only?
EDIT
My use case is the following: I have created an environment a while ago, which I would like to add new nodes to. Therefore, I need to update that environment with the addition of new nodes and with some installation steps on those new nodes.
If you need to perform an action only on a newly created node, then you can do it like this:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
onInstall:
  - addNodes:
      nodeType: docker
      count: 1
      nodeGroup: runner
      fixedCloudlets: 1
      cloudlets: 16
      dockerName: gitlab/gitlab-runner
  - installDocker: ${nodes.runner.last.id}
actions:
  installDocker:
    - cmd [${this}]:
        - myDockerInstallScript.sh
Also, thanks for the comment about the documentation, we have updated it: https://docs.cloudscripting.com/creating-manifest/actions/#addnodes
The execution of other events in the environment occurs only after the successful completion of the onInstall event. The onAfterAddNode event will run the next time a node is added. Here you can see the sequence of events. If you just need to call the action during installation, then you need to do this in onInstall:
Example:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
onInstall:
  - addNodes:
      - nodeType: docker
        count: 1
        fixedCloudlets: 1
        cloudlets: 16
        dockerName: gitlab/gitlab-runner
  - installDocker
actions:
  installDocker:
    - cmd:
        - myDockerInstallScript.sh
If a certain action must also be performed each time a node is added to the environment's topology, then you can do it this way:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
onInstall:
  - addNodes:
      - nodeType: docker
        count: 1
        nodeGroup: runner
        fixedCloudlets: 1
        cloudlets: 16
        dockerName: gitlab/gitlab-runner
  - installDocker
onAfterAddNode [runner]:
  - installDocker
actions:
  installDocker:
    - cmd [runner]:
        - myDockerInstallScript.sh
If you want a specific action to be performed after scaling the entire layer, then you can do it this way:
jpsVersion: 1.3
jpsType: update
application:
  id: test
  name: Test
  version: 0.0
onInstall:
  - addNodes:
      - nodeType: docker
        count: 1
        nodeGroup: runner
        fixedCloudlets: 1
        cloudlets: 16
        dockerName: gitlab/gitlab-runner
  - installDocker
onAfterScaleOut [runner]:
  forEach(event.response.nodes):
    installDocker: ${#i.id}
actions:
  installDocker:
    - cmd [${this}]:
        - myDockerInstallScript.sh