Bad indentation of a sequence entry bitbucket pipelines - yaml

I currently have a step in Bitbucket Pipelines which does some setup work. The last command starts an AWS ECS task, like this:
- step:
    name: Migrate database
    script:
      - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
      - apt-get update
      - apt-get install -y unzip python
      - unzip awscli-bundle.zip
      - ./awscli-bundle/install -b ~/bin/aws
      - export PATH=~/bin:$PATH
      - aws ecs run-task --cluster test-cluster --task-definition test-task --overrides '{ "containerOverrides": [ { "name": "test-container", "command": [ "echo", "hello world" ], "environment": [ { "name": "APP_ENV", "value": "local" } ] } ] }' --network-configuration '{ "awsvpcConfiguration": { "subnets": ["subnet-xxxxxxx"], "securityGroups": ["sg-xxxxxxx"], "assignPublicIp": "ENABLED" }}' --launch-type FARGATE
This fails validation with the error:
Bad indentation of a sequence entry
Splitting the statement across multiple lines does not work either. What would be the correct approach here?

The issue is that the command contains a colon followed by a space, which causes the YAML parser to interpret the entry as a map and not a string.
The easiest solution would be to move
aws ecs run-task --cluster test-cluster --task-definition test-task --overrides '{ "containerOverrides": [ { "name": "test-container", "command": [ "echo", "hello world" ], "environment": [ { "name": "APP_ENV", "value": "local" } ] } ] }' --network-configuration '{ "awsvpcConfiguration": { "subnets": ["subnet-xxxxxxx"], "securityGroups": ["sg-xxxxxxx"], "assignPublicIp": "ENABLED" }}' --launch-type FARGATE
into a script file and call it from Pipelines.
You could also remove all the spaces after any ':' characters. But given the amount of JSON there, you'd likely run into the same issue again when modifying it, so the script file is probably the easier option here.
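For example, you might create a helper script (hypothetical name migrate.sh, committed next to bitbucket-pipelines.yml and made executable); the command is exactly the one from the question:

#!/bin/sh
# migrate.sh - runs the ECS task; YAML never parses this file,
# so the colons inside the JSON are harmless here
set -e
aws ecs run-task \
    --cluster test-cluster \
    --task-definition test-task \
    --overrides '{ "containerOverrides": [ { "name": "test-container", "command": [ "echo", "hello world" ], "environment": [ { "name": "APP_ENV", "value": "local" } ] } ] }' \
    --network-configuration '{ "awsvpcConfiguration": { "subnets": ["subnet-xxxxxxx"], "securityGroups": ["sg-xxxxxxx"], "assignPublicIp": "ENABLED" }}' \
    --launch-type FARGATE

The last entry in the script list then simply becomes - ./migrate.sh, which contains no colon-space sequence for YAML to misread.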

Related

Appending to a configuration file

I am creating a script which updates hosts in an application; the config file for each host looks like the one below. The script generates the hosts correctly, but I need to append a comma , to every closing } except the last host's.
I have tried numerous things, but the closest I have got is putting the host's content on a single line and running an IFS statement against it. I'm also not sure how best to approach this; can anyone advise?
{
    "cmd": "ssh user@webserver",
    "inTerminal": "new",
    "name": "webserver",
    "theme": "basic",
    "title": "Webserver",
}
Example of what I am trying to achieve:
{
    "cmd": "ssh user@webserver",
    "inTerminal": "new",
    "name": "webserver",
    "theme": "basic",
    "title": "Webserver",
},
{
    "cmd": "ssh user@db",
    "inTerminal": "new",
    "name": "db server",
    "theme": "basic",
    "title": "db",
},
{
    "cmd": "ssh user@mail",
    "inTerminal": "new",
    "name": "mail server",
    "theme": "basic",
    "title": "mail server",
}
You can do things like:
#!/bin/sh
for f in $(generate-host-list); do
    read -d \000 c < "$f"
    list="$list${list+,
}$c"
done
echo "$list"
If you are just writing to a file, this can be simpler (no need for the read; just cat the file). Similarly, if you don't care about munging whitespace, you could do list="$list${list+,}$(cat "$f")". If you are using bash or some other shells, you can use non-portable things like += to clean it up.
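As a rough sketch of that bash variant (same assumed generate-host-list helper, printing one filename per line):

#!/bin/bash
list=""
for f in $(generate-host-list); do
    # add a ",\n" separator before every entry except the first
    [ -n "$list" ] && list+=$',\n'
    list+=$(cat "$f")
done
echo "$list"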
You can do it like this:
sed '$q; s/^}$/},/' <in_file >out_file
The above sed command works as follows: first check whether you've reached the last line, and if so quit. Otherwise, check whether the only character on the line is }, and if so replace it with },.
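If you want to edit the file in place instead of redirecting, GNU sed accepts the same script with -i (safe here, since the $q still prints the last line before quitting):

sed -i '$q; s/^}$/},/' in_file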

Using jq in Gitlab CI yaml

I have a JSON file, sample.json. Below is a snippet from it:
{
    "AddOnModules": {
        "Description": "add on modules",
        "Type": "Array",
        "AllowedValues": [
            "a",
            "b",
            "c"
        ],
        "value": []
    }
}
I'm trying to provide a value for AddOnModules through a GitLab CI variable (parameter value) at runtime while running the pipeline. The following is a snippet of the pipeline:
stages:
  - deploy
# Job to deploy for development
dev-deploy:
  variables:
  before_script:
    - apk add jq
  image: python:3.7.4-alpine3.9
  script:
    - tmp=$(mktemp)
    - jq -r --arg add_on_modules "$add_on_modules" '.AddOnModules.value |= .+ [$add_on_modules] ' sample.json > "$tmp" && mv "$tmp" sample.json
    - cat sample.json
  stage: deploy
  tags:
    - docker
    - linux
  only:
    variables:
      - $stage =~ /^deploy$/ && $deployment_mode =~ /^dev$/
I'm giving the variable add_on_modules the value "a","b" through GitLab CI while running the pipeline. After cat sample.json, the file is observed to be:
{
    "AddOnModules": {
        "Description": "add on modules",
        "Type": "Array",
        "AllowedValues": [
            "a",
            "b",
            "c"
        ],
        "value": [ "\"a\",\"b\""]
    }
}
Extra double quotes are getting prepended and appended, while the existing ones are escaped.
I want output something like this:
{
    "AddOnModules": {
        "Description": "add on modules",
        "Type": "Array",
        "AllowedValues": [
            "a",
            "b",
            "c"
        ],
        "value": ["a","b"]
    }
}
It looks like I'm missing something with jq -
- jq -r --arg add_on_modules "$add_on_modules" '.AddOnModules.value |= .+ [$add_on_modules] ' sample.json > "$tmp" && mv "$tmp" sample.json
I tried the -r/--raw-output flag with jq, but with no success. Any suggestions on how to solve this?
If $add_on_modules is ["a","b"]
If you can set add_on_modules to the textual representation of a JSON array, then you would use --argjson, like so:
add_on_modules='["a","b"]'
jq -r --argjson add_on_modules "$add_on_modules" '
  .AddOnModules.value += $add_on_modules
' sample.json
If $add_on_modules is the string "a","b"
add_on_modules='"a","b"'
jq -r --arg add_on_modules "$add_on_modules" '
  .AddOnModules.value += ($add_on_modules|split(",")|map(fromjson))
' sample.json
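Putting the first variant back into your job, the script line would look something like this (assuming the CI variable add_on_modules is set to the JSON text ["a","b"]):

- tmp=$(mktemp)
- jq -r --argjson add_on_modules "$add_on_modules" '.AddOnModules.value += $add_on_modules' sample.json > "$tmp" && mv "$tmp" sample.json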

ansible awx/tower not accepting list of values in a variable

I am trying to launch an AWX/Tower job template, passing a list of values in a variable, but the task gets executed on only one target host.
Sample request:
curl -H "Content-Type: application/json" -X POST -s -u admin:admin123 -d '{ "extra_vars": { "domain": "dom-cn-1", "targets": "dev-cn-c1", "targets": "dev-cn-c2", "fwcmd": "fw sam -v -J src 192.168.10.10" }}' -k https://172.16.102.4/api/v2/job_templates/10/launch/
The above command does not execute as expected and runs on a single host. However, it works as expected when I run the same playbook via the CLI.
Vars file snippet:
domain: dom-cn-1
targets:
- dev-cn-c1
- dev-cn-c2
Playbook file
- name: "Create output file"
check_point_mgmt:
command: run-script
parameters:
script-name: "Create output file"
script: "fw sam -v -J src 192.168.10.10"
targets: "{{ targets }}"
session-data: "{{login_response}}"
Let's extract the JSON from your curl command:
{
    "extra_vars": {
        "domain": "dom-cn-1",
        "targets": "dev-cn-c1",
        "targets": "dev-cn-c2",
        "fwcmd": "fw sam -v -J src 192.168.10.10"
    }
}
You are not passing a list; you are passing the same parameter twice with different values. You have to correct your JSON like this:
{
    "extra_vars": {
        "domain": "dom-cn-1",
        "targets": ["dev-cn-c1", "dev-cn-c2"],
        "fwcmd": "fw sam -v -J src 192.168.10.10"
    }
}
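With that fix applied, the full request would look something like:

curl -H "Content-Type: application/json" -X POST -s -u admin:admin123 \
    -d '{ "extra_vars": { "domain": "dom-cn-1", "targets": ["dev-cn-c1", "dev-cn-c2"], "fwcmd": "fw sam -v -J src 192.168.10.10" }}' \
    -k https://172.16.102.4/api/v2/job_templates/10/launch/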

Changing files (using sed) in Packer script leaves files unchanged

Currently I am looking into building a build pipeline using Packer and Docker.
This is my packer.json:
{
    "builders": [{
        "type": "docker",
        "image": "php:7.0-apache",
        "commit": true
    }],
    "provisioners": [
        {
            "type": "file",
            "source": "./",
            "destination": "/var/www/html/"
        },
        {
            "type": "shell",
            "inline": [
                "chown -R www-data:www-data /var/www/html",
                "sed '/<Directory \\/var\\/www\\/>/,/<\\/Directory>/ s/AllowOverride None/AllowOverride all/' /etc/apache2/apache2.conf",
                "sed '/<VirtualHost/,/<\\/VirtualHost>/ s/DocumentRoot \\/var\\/www\\/html/DocumentRoot \\/var\\/www\\/html\\/web/' /etc/apache2/sites-enabled/000-default.conf"
            ]
        }
    ]
}
The shell script inside the provisioners section contains some sed commands for changing the AllowOverride and DocumentRoot settings in the Apache config.
When Packer runs this script, everything seems fine and I get positive sed output, so sed itself appears to work. But in the resulting Docker image the files are unchanged.
Copying the files in the file provisioner works fine.
What am I doing wrong?
It seems you're missing the -i (or --in-place) flag in your sed commands. Without it, sed writes the modified text to stdout and leaves the file itself untouched. Try with:
"sed -i <expression> <file>"

Shell command to return value in json output

How can I return a particular value using a shell command?
In the following example, I would like the query to return the value of "StackStatus", which is "CREATE_COMPLETE".
Here is the command:
aws cloudformation describe-stacks --stack-name stackname
Here is the output:
{
    "Stacks": [{
        "StackId": "arn:aws:cloudformation:ap-southeast-2:64560756805470:stack/stackname/8c8e3330-9f35-1er6-902e-50fae94f3fs42",
        "Description": "Creates base IAM roles and policies for platform management",
        "Parameters": [{
            "ParameterValue": "64560756805470",
            "ParameterKey": "PlatformManagementAccount"
        }],
        "Tags": [],
        "CreationTime": "2016-10-31T06:45:02.305Z",
        "Capabilities": [
            "CAPABILITY_IAM"
        ],
        "StackName": "stackname",
        "NotificationARNs": [],
        "StackStatus": "CREATE_COMPLETE",
        "DisableRollback": false
    }]
}
The AWS CLI supports the --query option to extract parts of the output. In addition, you could pipe the output to another command-line tool, jq, to do a similar query.
In aws notation, to get the first result:
aws cloudformation describe-stacks --stack-name stackname --query 'Stacks[0].StackStatus' --output text
Based on the above output, Stacks is an array of objects, hence the [0] to get the first element of the array; .StackStatus is then a key in that object whose value is a string. The --output text option presents the output as a plain text value instead of a JSON-quoted string.
Edited per Charles' comment.
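If you prefer the jq route mentioned above, an equivalent command would be:

aws cloudformation describe-stacks --stack-name stackname | jq -r '.Stacks[0].StackStatus'

Here jq's -r flag prints the raw string without the surrounding JSON quotes.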
