YAML: bad indentation of a mapping entry in GitLab CI

I am trying to implement a CI/CD pipeline using GitLab, and I created a CI file with the following content:
stages:
  - deploy

image:
  name: "ubuntu:16.04"

first-pipeline:test:
  stage: deploy
  tags:
    - executor:docker
  only:
    refs:
      - branches
      - schedules
  script:
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - echo "Job: $job_param"
    - ansible-playbook -i production.ini -e "job_id=$job_param ansible_ssh_user=ubuntu" my-playbook.yml -l "10.37.23.230"
    - apk add curl
    - curl -X POST http://98.121.222.32:8080/api/v2/removejob -H 'Content-Type: application/json' -d "{"jobId": $job_param}"
    - echo $query
    - echo "Executed at= $now"
I keep running into the following error message: bad indentation of a mapping entry
24 | ...
25 | ... ubuntu" my-playbook.yml -l "10.37.23.230"
26 | ...
27 | ... application/json' -d "{"jobId": $job_param}"
-----------------------------------------^
28 | ...
29 | ...
Any suggestion on how to fix it? Any help will be appreciated. Thank you.

This worked, quoting the whole script line as a single-quoted YAML scalar so that the inner double quotes no longer break the YAML parser:
- 'curl -H "Content-Type: application/json" -X POST http://98.121.222.32:8080/api/v2/removejob -d "{"jobId": $job_param}"'
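Note that even once YAML accepts the line, the shell quoting of the JSON body is still fragile: in -d "{"jobId": $job_param}" the inner double quotes are consumed by the shell, so the server receives invalid JSON. A minimal sketch of the difference (job_param=42 is just a stand-in value):

```shell
#!/bin/sh
job_param=42

# Original quoting: the shell eats the inner double quotes,
# so the payload is not valid JSON
echo "{"jobId": $job_param}"      # prints {jobId: 42}

# Escaping the inner quotes keeps them in the payload
echo "{\"jobId\": $job_param}"    # prints {"jobId": 42}
```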

Related

Bash Extract 1st key of the json

Hi, I need to extract the first key of the output JSON. I've tried different regexes, but they didn't give the expected results. Could you please help me solve this?
LANGUAGES=$(curl \
  --request GET \
  --header 'authorization: Bearer ${{ secrets.GITHUB_TOKEN }}' \
  --header 'content-type: application/string' \
  --url 'https://api.github.com/repos/${{ github.repository }}/languages' \
)
echo "$LANGUAGES" | regex
The outputs and keys will be dynamic:
{
  "HCL": 56543,
  "Shell": 22986,
  "Dockerfile": 307
}
Expected output: HCL
{
  "Java": 56543,
  "C++": 22986,
  "C#": 307
}
Expected output: Java
{
  "Python": 56543,
  "SHELL": 22986,
  "C": 307
}
Expected output: Python
Use jq instead of a regex; keys_unsorted returns the keys in document order, and first picks the first one:
echo "$LANGUAGES" | jq -r 'keys_unsorted | first'
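A quick check of that jq filter against the first sample object from the question (keys_unsorted preserves the order the keys appear in the document, whereas keys would sort them alphabetically):

```shell
#!/bin/sh
LANGUAGES='{"HCL": 56543, "Shell": 22986, "Dockerfile": 307}'

# keys_unsorted returns the keys in document order; first takes element 0;
# -r prints the raw string without JSON quotes
echo "$LANGUAGES" | jq -r 'keys_unsorted | first'    # prints HCL
```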

GitHub Actions: Passing JSON data to another job

I'm attempting to pass an array of dynamically fetched data from one GitHub Actions job to the job doing the actual build. This array will be used as part of a matrix to build for multiple versions. However, I'm encountering an issue when the bash variable storing the array is evaluated.
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          sudo apt-get install -y jq && \
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          VERSIONS=($MAINNET $TESTNET) && \
          echo "${VERSIONS[@]}" && \
          VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s) && \
          echo $VERSION_JSON && \
          echo '::set-output name=value::$VERSION_JSON'
        shell: bash
      - id: debug
        run: |
          echo "Result: ${{ steps.matrix.outputs.value }}"
  changes:
    needs: setup
    runs-on: ubuntu-latest
    # Set job outputs to values from filter step
    outputs:
      core: ${{ steps.filter.outputs.core }}
      package: ${{ steps.filter.outputs.package }}
    strategy:
      matrix:
        TEST: [buy, cancel, create_auction_house, delegate, deposit, execute_sale, sell, update_auction_house, withdraw_from_fee, withdraw_from_treasury, withdraw]
        SOLANA_VERSION: ${{ fromJson(needs.setup.outputs.versions) }}
    steps:
      - uses: actions/checkout@v2
      # For pull requests it's not necessary to checkout the code
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            core:
              - 'core/**'
            package:
              - 'auction-house/**'
      - name: debug
        id: debug
        working-directory: ./auction-house/program
        run: echo ${{ needs.setup.outputs.versions }}
In the setup job above, the two versions are collected into a bash array (VERSIONS) and converted into a JSON array (VERSION_JSON) to be passed to the next job. The last echo in the matrix step prints [ "1.10.31", "1.11.1" ], but the debug step prints this:
Run echo "Result: "$VERSION_JSON""
echo "Result: "$VERSION_JSON""
shell: /usr/bin/bash -e {0}
env:
CARGO_TERM_COLOR: always
RUST_TOOLCHAIN: stable
Result:
The changes job also results in an error:
Error when evaluating 'strategy' for job 'changes'.
.github/workflows/program-auction-house.yml (Line: 44, Col: 25): Unexpected type of value '$VERSION_JSON', expected type: Sequence.
It definitely seems like the $VERSION_JSON variable isn't actually being evaluated properly, but I can't figure out where the evaluation is going wrong.
For echo '::set-output name=value::$VERSION_JSON' you need to use double quotes, otherwise bash will not expand $VERSION_JSON.
Also, set-output is not happy with multi-line data. For your case, you can use jq -s -c so the output will be a single line.
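Putting both fixes together, the tail of the matrix step could look like the sketch below; the version strings are hard-coded stand-ins for the two curl results so it runs offline:

```shell
#!/bin/bash
# Stand-ins for the two curl | jq results (they come back as quoted JSON strings)
MAINNET='"1.10.31"'
TESTNET='"1.11.1"'

VERSIONS=($MAINNET $TESTNET)

# -s slurps the stream of JSON strings into one array,
# -c prints it compactly on a single line (set-output needs one line)
VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s -c .)
echo "$VERSION_JSON"    # prints ["1.10.31","1.11.1"]

# Double quotes so bash actually expands the variable
echo "::set-output name=value::$VERSION_JSON"
```

(Newer runners replace set-output with writing to the $GITHUB_OUTPUT file, but the double-quoting and single-line requirements are the same idea.)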

How can I run more complex job dependencies via Rundeck

I'm currently testing Rundeck for some PoCs I'm working on.
I currently only have it running on a single machine via docker, so I don't have access to multiple nodes.
I want to simulate the following job dependencies:
JobA is needed for JobB and JobC. But JobB and JobC must run in parallel.
JobC must run if the current local time is 9:00am. Doesn't matter if Job A is finished/started yet or not.
All this should be able to easily scale and expand up to several hundreds of Jobs.
Can some of you guys possibly help me?
I have tried several configurations, including the Job State plugin, but I somehow can't get this to work. The best I can do is run all the jobs either sequentially or in parallel.
The best (and easiest) way to achieve this is to use the ruleset strategy (only available in PagerDuty Process Automation On Prem, formerly "Rundeck Enterprise"), calling your jobs via the job reference step.
So, on the community version, the workaround is to call the jobs from a parent job via the Rundeck API, which basically means scripting the custom workflow behavior, e.g.:
JobA
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: b156b3ed-fde6-4fdc-af81-1ee919edc4ff
  loglevel: INFO
  name: JobA
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: b156b3ed-fde6-4fdc-af81-1ee919edc4ff
JobB
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 4c46c795-20c9-47a8-aa2e-e72f183b150f
  loglevel: INFO
  name: JobB
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: 4c46c795-20c9-47a8-aa2e-e72f183b150f
JobC (runs every day at 9:00 AM, regardless of the JobA execution):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: ce45c4d1-1350-407b-8001-2d90daf49eaa
  loglevel: INFO
  name: JobC
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  schedule:
    month: '*'
    time:
      hour: '09'
      minute: '00'
      seconds: '0'
    weekday:
      day: '*'
    year: '*'
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: ce45c4d1-1350-407b-8001-2d90daf49eaa
ParentJob (all the behavior logic is inside a bash script; needs the jq tool to work):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 6e67020c-7293-4674-8d56-5db811ae5745
  loglevel: INFO
  name: ParentJob
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: A friendly starting message
      exec: echo "starting..."
    - description: Execute JobA and get the execution id
      fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: ^(job_a_exec_id)\s*=\s*(.+)$
          type: key-value-data
      script: |-
        # execute JobA
        exec_id=$(curl -s -X POST \
          "http://localhost:4440/api/41/job/b156b3ed-fde6-4fdc-af81-1ee919edc4ff/run" \
          --header "Accept: application/json" \
          --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" | jq .id)

        echo "job_a_exec_id=$exec_id"
      scriptInterpreter: /bin/bash
    - description: Time to execute the job
      exec: sleep 2
    - description: Get the execution status
      fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            regex: ^(job_a_exec_status)\s*=\s*(.+)$
          type: key-value-data
      script: |-
        exec_status=$(curl -s -X GET \
          'http://localhost:4440/api/41/execution/@data.job_a_exec_id@' \
          --header 'Accept: application/json' \
          --header 'X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd' | jq -r .status)

        echo "job_a_exec_status=$exec_status"
      scriptInterpreter: /bin/bash
    - description: Run JobB and JobC in parallel, depending on the JobA execution status
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # Now execute JobB and JobC depending on the JobA final status (dependency)
        if [ @data.job_a_exec_status@ = "succeeded" ]; then
          echo "JobA execution is OK, running JobB and JobC in parallel..."
          # JobB
          curl -X POST \
            "http://localhost:4440/api/41/job/4c46c795-20c9-47a8-aa2e-e72f183b150f/run" \
            --header "Accept: application/json" \
            --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
          # JobC
          curl -X POST \
            "http://localhost:4440/api/41/job/ce45c4d1-1350-407b-8001-2d90daf49eaa/run" \
            --header "Accept: application/json" \
            --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
        else
          echo "JobA has failed, dependency failed"
        fi
      scriptInterpreter: /bin/bash
    - description: A friendly ending message
      exec: echo "done."
    keepgoing: false
    strategy: node-first
  uuid: 6e67020c-7293-4674-8d56-5db811ae5745
Parent Job's JobB/JobC launching script:
# Now execute JobB and JobC depending on the JobA final status (dependency)
if [ @data.job_a_exec_status@ = "succeeded" ]; then
  echo "JobA execution is OK, running JobB and JobC in parallel..."
  # JobB
  curl -X POST \
    "http://localhost:4440/api/41/job/4c46c795-20c9-47a8-aa2e-e72f183b150f/run" \
    --header "Accept: application/json" \
    --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
  # JobC
  curl -X POST \
    "http://localhost:4440/api/41/job/ce45c4d1-1350-407b-8001-2d90daf49eaa/run" \
    --header "Accept: application/json" \
    --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
else
  echo "JobA has failed, dependency failed"
fi
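One fragile spot in the parent job is the fixed sleep 2, which assumes JobA finishes within two seconds. A more robust sketch polls the execution endpoint until the status changes; get_status is a hypothetical helper wrapping the same curl status call used above, stubbed here so the sketch runs without a Rundeck server:

```shell
#!/bin/bash
# Hypothetical wrapper around the status call from the parent job; the real
# version would curl http://localhost:4440/api/41/execution/$1 with the
# auth-token header and pipe the response through: jq -r .status
get_status() {
  echo "succeeded"   # stub so the sketch runs offline
}

exec_id=123          # would come from the /run API response
status="running"
while [ "$status" = "running" ]; do
  status=$(get_status "$exec_id")
  [ "$status" = "running" ] && sleep 2
done
echo "job_a_exec_status=$status"    # prints job_a_exec_status=succeeded
```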

Jenkins pipeline stage build is green even though an error is present

I have a Jenkins pipeline with three stages: "Build", "Test" and "Deploy".
Here is the problem I have with "Build":
The build step ensures that the structure of the Control-M Automation API JSON files is valid.
To do this, I use the $endpoint/build service provided by the Automation API in the build step:
stage('Build') {
    environment {
        CONTROLM_CREDS = credentials('xxx')
        ENDPOINT = 'xxx'
    }
    steps {
        sh '''
            username=$CONTROLM_CREDS_USR
            password=$CONTROLM_CREDS_PSW
            # Login
            login=$(curl -k -s -H "Content-Type: application/json" -X POST -d \\{\\"username\\":\\"$username\\",\\"password\\":\\"$password\\"\\} "$ENDPOINT/session/login")
            token=$(echo ${login##*token\\" : \\"} | cut -d '"' -f 1)
            # Build
            curl -k -s -H "Authorization: Bearer $token" -X POST -F "definitionsFile=@ctmjobs/TestCICD.json" "$ENDPOINT/build"
            curl -k -s -H "Authorization: Bearer $token" -X POST "$ENDPOINT/session/logout"
        '''
    }
}
<snip>
Everything works as expected, but if I intentionally put an error in the JSON file, Jenkins detects it and prints the error in the terminal, yet "Build" still goes green. Can anyone spot the mistake? My expectation is that the "Build" stage goes red as soon as there is an error in the JSON file.
Here is a Jenkins output from the terminal:
+ password=****
++ curl -k -s -H 'Content-Type: application/json' -X POST -d '{"username":"xxx","password":"****"}' /automation-api/session/login
+ login='{
"username" : "xxx",
"token" : "xxx",
"version" : "9.19.200"
}'
++ echo 'xxx",
' '"version"' : '"9.19.200"
' '}'
++ cut -d '"' -f 1
+ token=xxx
+ curl -k -s -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
{
"errors" : [ {
"message" : "unknown type: Job:Dummmy",
"file" : "Test.json",
"line" : 40,
"col" : 29
}, {
"message" : "unknown type: Job:Dummmy",
"file" : "Test.json",
"line" : 63,
"col" : 29
} ]
}+ curl -k -s -H 'Authorization: Bearer xxx' -X POST /automation-api/session/logout
{
"message" : "Successfully logged out from session xxx"
}
For Jenkins to consider a stage failed, it checks the exit code of each executed command, in your case:
curl -k -s -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
The issue is that curl, as a command, executes successfully; it is the response body that indicates the API call failed.
You can add the --fail flag to curl. This forces curl to return a non-zero exit code when the response status is 400 or greater. From the curl documentation:
(HTTP) Fail silently (no output at all) on server errors. This is
mostly done to enable scripts etc to better deal with failed attempts.
In normal cases when an HTTP server fails to deliver a document, it
returns an HTML document stating so (which often also describes why
and more). This flag will prevent curl from outputting that and return
error 22.
curl --show-error --fail -k -H 'Authorization: Bearer xxx' -X POST -F definitionsFile=@ctmjobs/Test.json /automation-api/build
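To see the exit-code difference, you can serve a 404 locally; python3's stdlib http.server is used here purely for the demo, and port 8099 is arbitrary:

```shell
#!/bin/sh
# Start a throwaway local server and request a file that does not exist
python3 -m http.server 8099 --directory /tmp >/dev/null 2>&1 &
srv=$!
sleep 1

# Without --fail, curl exits 0 even though the server returned 404
curl -s http://127.0.0.1:8099/missing.json >/dev/null
echo "without --fail: $?"    # prints: without --fail: 0

# With --fail, an HTTP status of 400 or greater becomes exit code 22
curl -s --fail http://127.0.0.1:8099/missing.json >/dev/null
echo "with --fail: $?"       # prints: with --fail: 22

kill $srv
```

In the Jenkins sh step, which aborts on the first failing command, that exit code 22 is exactly what turns the stage red.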

I'm trying to unseal vault using ansible. But I'm getting connection refused error

It worked a few days back; I even checked similar problems like here.
I tried adding the environment variables and everything; as far as I know, my HCL file is also not the problem.
My HCL file is:
storage "file" {
  path = "/home/***/vault/"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}
My unseal.yml looks like this:
---
- name: Removing login and putting to another file
  shell: sed -n '7p' keys.txt > login.txt

- name: Remove all lines other than the keys
  shell: sed '6,$d' keys.txt > temp.txt

- name: Extracting the keys
  shell: cut -c15- temp.txt > unseal_keys.txt

- name: Deleting unnecessary files
  shell: rm temp.txt

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==1' unseal_keys.txt)

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==2' unseal_keys.txt)

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==3' unseal_keys.txt)
  register: check

- debug: var=check.stdout_lines

- name: Login
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault login $(sed 's/Initial Root Token://; s/ //' login.txt)
  register: checkLogin

- debug: var=checkLogin.stdout_lines
My start-server.yml looks like this:
---
#- name: Disable mlock
#  shell: sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
#  shell: LimitMEMLOCK=infinity

- name: Start vault service
  systemd:
    state: started
    name: vault
    daemon_reload: yes
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  become: true

- pause:
    seconds: 15
This is the error shown:
fatal: [europa]: FAILED! => {"changed": true, "cmd": "vault operator unseal $(awk 'NR==1' unseal_keys.txt)", "delta": "0:00:00.049258", "end": "2019-09-17 12:25:48.987789", "msg": "non-zero return code", "rc": 2, "start": "2019-09-17 12:25:48.938531", "stderr": "Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused", "stderr_lines": ["Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"], "stdout": "", "stdout_lines": []}
This is the main error:
"Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"
As it is showing connection refused, most probably your vault service is not running.
Another suggestion: create a script named unseal_vault.sh and use it to unseal your vault instead of repeating the same tasks in your playbook.
Below is a script which I use in my setup to unseal vault.
#!/bin/bash
# Assumptions: vault is already initialized

# Fetch the first three keys and submit each one to the unseal endpoint
for i in 1 2 3; do
  KEY=$(grep "Unseal Key $i" keys.log | awk '{print $4}')
  curl --silent -X PUT \
    http://192.*.*.*:8200/v1/sys/unseal \
    -H 'cache-control: no-cache' \
    -H 'content-type: application/json' \
    -d '{"key": "'"$KEY"'"}'
done
And you can run this script using a single task in Ansible.
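A sketch of that single task; the script name follows the suggestion above, and it assumes keys.log is readable from the directory the script runs in on the target host:

```yaml
# Hypothetical replacement for the three repeated unseal tasks
- name: Unseal the vault
  script: unseal_vault.sh
```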
