Changing files (using sed) in Packer script leaves files unchanged - bash

Currently I am looking into building a build pipeline using Packer and Docker.
This is my packer.json:
{
  "builders": [{
    "type": "docker",
    "image": "php:7.0-apache",
    "commit": true
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "./",
      "destination": "/var/www/html/"
    },
    {
      "type": "shell",
      "inline": [
        "chown -R www-data:www-data /var/www/html",
        "sed '/<Directory \\/var\\/www\\/>/,/<\\/Directory>/ s/AllowOverride None/AllowOverride all/' /etc/apache2/apache2.conf",
        "sed '/<VirtualHost/,/<\\/VirtualHost>/ s/DocumentRoot \\/var\\/www\\/html/DocumentRoot \\/var\\/www\\/html\\/web/' /etc/apache2/sites-enabled/000-default.conf"
      ]
    }
  ]
}
The shell provisioner contains some sed commands for changing the AllowOverride and DocumentRoot settings inside the Apache config.
When Packer runs this script everything works fine and I get positive sed output, so sed itself seems to work. But in the resulting Docker image the files are unchanged.
Copying the files with the file provisioner works fine.
What am I doing wrong?

It seems you're missing the -i (or --in-place) flag in your sed commands. Without it, sed writes the edited text to stdout and leaves the file itself untouched, which is why the command output looks right while the files in the image stay unchanged. Try:
"sed -i <expression> <file>"
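To see the difference, here is a minimal sketch using a throwaway temp file rather than the real Apache config:

```shell
cfg=$(mktemp)
printf 'AllowOverride None\n' > "$cfg"

# Without -i: the substitution goes to stdout only; the file keeps the old value
sed 's/AllowOverride None/AllowOverride all/' "$cfg"
grep 'AllowOverride' "$cfg"    # still prints: AllowOverride None

# With -i: the file is rewritten in place
sed -i 's/AllowOverride None/AllowOverride all/' "$cfg"
grep 'AllowOverride' "$cfg"    # now prints: AllowOverride all

rm -f "$cfg"
```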

How to extract data from a JSON file into a variable

I have the following JSON format; basically it is a huge file with several such entries.
[
  {
    "id": "kslhe6em",
    "version": "R7.8.0.00_BNK",
    "hostname": "abacus-ap-hf-test-001:8080",
    "status": "RUNNING",
  },
  {
    "id": "2bkaiupm",
    "version": "R7.8.0.00_BNK",
    "hostname": "abacus-ap-hotfix-001:8080",
    "status": "RUNNING",
  },
  {
    "id": "rz5savbi",
    "version": "R7.8.0.00_BNK",
    "hostname": "abacus-ap-hf-test-005:8080",
    "status": "RUNNING",
  },
]
I want to fetch all the hostname values that start with "abacus-ap-hf-test", without the ":8080" suffix, into a variable, and then use those values for further commands in a for loop, something like below. But I am a bit confused about how to extract that information.
HOSTNAMES="abacus-ap-hf-test-001 abacus-ap-hf-test-005"
for HOSTNAME in $HOSTNAMES
do
sh ./trigger.sh
done
Update the first line to this:
HOSTNAMES=$(grep -oP 'hostname": "\K(abacus-ap-hf-test[\w\d-]+)' json.file)
or, if you are sure that the hostname ends with :8080", try this:
HOSTNAMES=$(grep -oP '(?<="hostname": ")abacus-ap-hf-test[\w\d-]+(?=:8080")' json.file)
Here abacus-ap-hf-test[\w\d-]+ is the regex that matches the hostname itself; the surrounding strings anchor the match to the "hostname" key so that only the right values are found.
Assuming you have valid JSON, you can get the hostname values using jq:
while read -r hname ; do printf "%s\n" "$hname" ; done < <(jq -r '.[].hostname' j.json)
Output:
abacus-ap-hf-test-001:8080
abacus-ap-hotfix-001:8080
abacus-ap-hf-test-005:8080
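If the trailing commas are removed so the input is valid JSON, the prefix filter and the :8080 stripping can also be done entirely inside jq. A sketch, using the j.json filename from above:

```shell
# Keep only hostnames with the wanted prefix, then drop the port suffix
HOSTNAMES=$(jq -r '.[].hostname
  | select(startswith("abacus-ap-hf-test"))
  | sub(":8080$"; "")' j.json)

for h in $HOSTNAMES; do
  echo "$h"
done
# abacus-ap-hf-test-001
# abacus-ap-hf-test-005
```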

Bash or PowerShell: parsing JSON file content

I would like to manipulate the content of a JSON file.
I've tried with PowerShell and Linux bash, but I was unable to get what I want.
On Linux, I was thinking of using the jq tool; I can obtain the data, but I cannot manipulate it the way I want.
jq '.[].pathSpec, .[].scope' jasonfilepath
Current output:
"file"
"file"
"/u01/app/grid/*/bin/oracle"
"/u01/app/oracle/product/*/db_1/bin/oracle"
My goal is to obtain something similar as:
scope pathSpec
Like:
file /u01/app/grid/*/bin/oracle
file /u01/app/oracle/product/*/db_1/bin/oracle
JSON file sample
[
  {
    "actions": [
      "upload",
      "detect"
    ],
    "deep": false,
    "dfi": true,
    "dynamic": true,
    "inject": false,
    "monitor": false,
    "pathSpec": "/u01/app/grid/*/bin/oracle",
    "scope": "file"
  },
  {
    "actions": [
      "upload",
      "detect"
    ],
    "deep": false,
    "dfi": true,
    "dynamic": true,
    "inject": false,
    "monitor": false,
    "pathSpec": "/u01/app/oracle/product/*/db_1/bin/oracle",
    "scope": "file"
  }
]
Do you have any idea how to get this kind of expected output in PowerShell and bash?
Thanks in advance,
Assuming a JSON input file named file.json:
In a Linux / Bash environment, use the following:
jq -r '.[] | .scope + " " + .pathSpec' file.json
In PowerShell, use the following (adapted from a comment by JohnLBevan):
(Get-Content -Raw file.json | ConvertFrom-Json) |
ForEach-Object { '{0} {1}' -f $_.scope, $_.pathSpec }
Note the (...) around the pipeline with the ConvertFrom-Json call, which is necessary in Windows PowerShell (but no longer in PowerShell (Core) 7+) to ensure that the parsed JSON array is enumerated in the pipeline, i.e. to ensure that its elements are sent one by one - see this post for more information.
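If tab-separated output is acceptable, the jq side can also emit both fields per element in one pass with the @tsv filter. A sketch against the same file.json:

```shell
# One array per element, rendered as a tab-separated row
jq -r '.[] | [.scope, .pathSpec] | @tsv' file.json
# file	/u01/app/grid/*/bin/oracle
# file	/u01/app/oracle/product/*/db_1/bin/oracle
```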

Appending to a configuration file

I am creating a script which updates hosts in an application; the config file for each host looks like the one below. The script generates the hosts correctly, but I need to append a comma , to every closing } except the last host's.
I have tried numerous things, but the closest I have got is putting each host's content on a single line and running an IFS statement against it. I'm also not sure how best to approach this; can anyone advise?
{
  "cmd": "ssh user@webserver",
  "inTerminal": "new",
  "name": "webserver",
  "theme": "basic",
  "title": "Webserver",
}
Example of what I am trying to achieve:
{
  "cmd": "ssh user@webserver",
  "inTerminal": "new",
  "name": "webserver",
  "theme": "basic",
  "title": "Webserver",
},
{
  "cmd": "ssh user@db",
  "inTerminal": "new",
  "name": "db server",
  "theme": "basic",
  "title": "db",
},
{
  "cmd": "ssh user@mail",
  "inTerminal": "new",
  "name": "mail server",
  "theme": "basic",
  "title": "mail server",
}
You can do things like:
#!/bin/bash
for f in $(generate-host-list); do
  read -d '' c < "$f"
  list="$list${list+,
}$c"
done
echo "$list"
If you are just writing to a file that can be simpler (no need for the read, just cat the file). Similarly, if you don't care about munging whitespace, you could do list="$list${list+,}$(cat "$f")". If you are using bash or some other shells you can do non-portable things like += to clean it up.
You can do it like this:
sed '$q; s/^}$/},/' <in_file >out_file
The above sed command works as follows: first it checks whether the last line has
been reached, and if so prints it unchanged and quits. Otherwise, it checks whether
the only character on the line is }, and if so replaces it with },.
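A quick way to see the effect, with three bare closing braces standing in for the host blocks:

```shell
printf '}\n}\n}\n' | sed '$q; s/^}$/},/'
# },
# },
# }
```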

How to execute a multiline command in Ansible 2.9.10 on Fedora 32

I want to execute a command on a remote machine using Ansible 2.9.10. First I tried this:
ansible kubernetes-root -m command -a "cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}"
Obviously it is not working, so I read this guide and tried this:
- hosts: kubernetes-root
  remote_user: root
  tasks:
    - name: add docker config
      shell: >
      cat > /etc/docker/daemon.json <<EOF
      {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
      "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "registry-mirrors":[
      "https://kfwkfulq.mirror.aliyuncs.com",
      "https://2lqq34jg.mirror.aliyuncs.com",
      "https://pee6w651.mirror.aliyuncs.com",
      "http://hub-mirror.c.163.com",
      "https://docker.mirrors.ustc.edu.cn",
      "https://registry.docker-cn.com"
      ]
      }
and execute it like this:
[dolphin#MiWiFi-R4CM-srv playboook]$ ansible-playbook add-docker-config.yaml
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
could not find expected ':'
The error appears to be in '/home/dolphin/source-share/source/dolphin/dolphin-scripts/ansible/playboook/add-docker-config.yaml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
cat > /etc/docker/daemon.json <<EOF
{
^ here
Is there any way to achieve this? How can I fix it?
Your playbook should work fine; you just have to add some indentation to the lines after the shell clause, and change the > to |.
Here is the updated playbook:
---
- name: play name
  hosts: dell420
  gather_facts: false
  vars:
  tasks:
    - name: run shell task
      shell: |
        cat > /tmp/temp.file << EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2",
          "registry-mirrors":[
            "https://kfwkfulq.mirror.aliyuncs.com",
            "https://2lqq34jg.mirror.aliyuncs.com",
            "https://pee6w651.mirror.aliyuncs.com",
            "http://hub-mirror.c.163.com",
            "https://docker.mirrors.ustc.edu.cn",
            "https://registry.docker-cn.com"
          ]
        }
        EOF
Not sure what is wrong with the ad-hoc command; I tried a few things but didn't manage to make it work.
Hope this helps.
EDIT:
As pointed out by Zeitounator, the ad-hoc command will work if you use the shell module instead of command. Example:
ansible -i hosts dell420 -m shell -a 'cat > /tmp/temp.file <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}
EOF
'
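As an aside, a shell heredoc isn't really needed for this: Ansible's copy module can write the file directly from inline content, which sidesteps the quoting problems entirely. A sketch (content abbreviated from the daemon.json above):

```yaml
- name: add docker config
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "storage-driver": "overlay2"
      }
```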

Bad indentation of a sequence entry bitbucket pipelines

I currently have a step in Bitbucket Pipelines which does some stuff. The last step is to start an AWS ECS task, like this:
- step:
    name: Migrate database
    script:
      - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
      - apt-get update
      - apt-get install -y unzip python
      - unzip awscli-bundle.zip
      - ./awscli-bundle/install -b ~/bin/aws
      - export PATH=~/bin:$PATH
      - aws ecs run-task --cluster test-cluster --task-definition test-task --overrides '{ "containerOverrides": [ { "name": "test-container", "command": [ "echo", "hello world" ], "environment": [ { "name": "APP_ENV", "value": "local" } ] } ] }' --network-configuration '{ "awsvpcConfiguration": { "subnets": ["subnet-xxxxxxx"], "securityGroups": ["sg-xxxxxxx"], "assignPublicIp": "ENABLED" }}' --launch-type FARGATE
This fails the validation with the error:
Bad indentation of a sequence entry bitbucket pipelines
Splitting the statement up on multiple lines is not working either. What would be the correct approach here?
The issue is you have a colon followed by a space, which causes the YAML parser to interpret this as a map and not a string.
The easiest solution would be to move
aws ecs run-task --cluster test-cluster --task-definition test-task --overrides '{ "containerOverrides": [ { "name": "test-container", "command": [ "echo", "hello world" ], "environment": [ { "name": "APP_ENV", "value": "local" } ] } ] }' --network-configuration '{ "awsvpcConfiguration": { "subnets": ["subnet-xxxxxxx"], "securityGroups": ["sg-xxxxxxx"], "assignPublicIp": "ENABLED" }}' --launch-type FARGATE
into a script file, and call it from Pipelines.
You could also remove all the spaces after any ':' characters. But given the amount of JSON there, you'd likely run into the same issue again when modifying it, so the script file is probably the easier option here.
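For example, with the aws ecs run-task invocation moved into a hypothetical run-ecs-task.sh committed alongside the pipeline config, the step reduces to:

```yaml
- step:
    name: Migrate database
    script:
      - chmod +x ./run-ecs-task.sh
      - ./run-ecs-task.sh
```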
