$ characters get removed from json - ruby

I am currently improving the deployment process of our service.
In particular, I want to update the revision stored in the OpsWorks CustomJson stack property as soon as we have deployed a new one.
For this I created a new task in our Rakefile. Here is the code:
desc "update revision in custom json of stack"
task :update_stack_revision, [:revision, :stack_id] do |t, arg|
  revision = arg[:revision]
  stack_id = arg[:stack_id]

  # get stack config
  stack_description = `aws opsworks \
    --region us-east-1 \
    describe-stacks \
    --stack-id #{stack_id}`

  # get the json config
  raw_custom_json = JSON.parse(stack_description)["Stacks"][0]["CustomJson"]

  # make it parseable by removing invalid characters
  raw_custom_json = raw_custom_json.gsub(/(\\n)/, '')
  raw_custom_json = raw_custom_json.gsub(/(\\")/, '"')

  # parse json and update revision
  parsed_custom_json = JSON.parse(raw_custom_json)
  parsed_custom_json["git"]["revision"] = revision

  # transform the updated object back into json and bring it into the format required by aws opsworks
  updated_json = JSON.generate(parsed_custom_json)
  updated_json = updated_json.gsub('"', '\"')

  # send update
  `aws opsworks \
    --region us-east-1 \
    update-stack \
    --stack-id #{stack_id} \
    --custom-json "#{updated_json}"`
end
During this process, $ characters are lost for some reason.
I tried reproducing the error by executing each command individually. Apparently the last one, aws opsworks update-stack, is at fault. I'd really like to know why, and how to stop it.
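For what it's worth, the backticks hand that last command to /bin/sh with the JSON interpolated inside double quotes, so the shell performs parameter expansion on every $word in the payload, and that is exactly where $ characters disappear. A minimal sketch of the effect, plus one way around it by escaping the dollar signs before interpolation (the JSON value here is made up):
# This is roughly what /bin/sh sees: the JSON sits inside double quotes
# with its own quotes backslash-escaped, so $DEF is expanded (to nothing).
json='{\"git\":{\"password\":\"abc$DEF\"}}'
sh -c "echo \"$json\""
# {"git":{"password":"abc"}}

# Escaping every $ before building the command keeps it literal.
escaped=$(printf '%s' "$json" | sed 's/\$/\\$/g')
sh -c "echo \"$escaped\""
# {"git":{"password":"abc$DEF"}}
The same escaping can be done in the Rake task (one more gsub on updated_json before the interpolation), or the shell can be avoided altogether by invoking the AWS CLI through an argument list rather than backticks.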

Related

BASH SCRIPT: how to add pre-defined parameter values for "amplify add api"

I want to know how to pass pre-defined parameters to the amplify add api command in this bash script. I have to write an automation script that creates the Amplify project, a REST API, and a Lambda function.
In the code below, as you can see, for
AMPLIFY="{
"projectName":"AmplifyNeelDemo",
"envName":"dev",
"defaultEditor":"code"
}"
the pre-defined params are projectName, envName, and defaultEditor. I want to pass the same kind of params to amplify add api in the automation script.
#!/bin/bash
set -e
IFS='|'
REACTCONFIG="{\
\"SourceDir\":\"src\",\
\"DistributionDir\":\"build\",\
\"BuildCommand\":\"npm run-script build\",\
\"StartCommand\":\"npm run-script start\"\
}"
AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":false,\
\"profileName\":\"default\",\
\"accessKeyId\":\"accesskey\",\
\"secretAccessKey\":\"secaccesskey\",\
\"region\":\"us-east-1\"\
}"
AMPLIFY="{\
\"projectName\":\"AmplifyNeelDemo\",\
\"envName\":\"dev\",\
\"defaultEditor\":\"code\"\
}"
FRONTEND="{\
\"frontend\":\"javascript\",\
\"framework\":\"react\",\
\"config\":$REACTCONFIG\
}"
LAMBDA_FUN="{\
\"name\":\"Hello-World\",\
\"runtime\":\"NodeJS\",\
\"template\":\"Hello World\"\
}"
API="{\
\"service\":\"REST\",\
\"name\":\"myapi\",\
\"path\":\"/hello\",\
\"lambdaFunction\":$LAMBDA_FUN\
}"
PROVIDERS="{\
\"awscloudformation\":$AWSCLOUDFORMATIONCONFIG\
}"
amplify init \
--amplify $AMPLIFY \
--frontend $FRONTEND \
--providers $PROVIDERS \
--yes
amplify add api \
--api $API \
--lambda_function $LAMBDA_FUN \
--no
amplify push -y
amplify add hosting
amplify publish -y

How can I pass the '-t azure://' target into a Ruby InSpec script?

In my script I want to test Azure resources using the Ruby library (not the inspec binary), running in a container:
def my_resource_groups
  rg = Inspec::Runner.new(conf={:vendor_cache=>'/app'})
  rg.add_target('/app/profiles/azure')
  rg.run
end
my_resource_groups()
with this inspec.yml definition
name: inspector
title: Azure InSpec Profile
maintainer: The Authors
copyright: The Authors
copyright_email: you@example.com
license: Apache-2.0
summary: An InSpec Compliance Profile For Azure
version: 0.1.0
inspec_version: '>= 2.2.7'
depends:
  - name: inspec-azure
    url: https://github.com/inspec/inspec-azure/archive/master.tar.gz
And this test:
title "Azure Resource group spike"

control 'azure_resource_groups' do
  describe azure_resource_group do
    its('names') { should include 'my_resource_group1' }
  end
end
I get:
Skipping profile: 'inspector' on unsupported platform: 'debian/10.7'.
How do I pass the equivalent -t azure:// argument to my ruby script, in the same way as I would if I did this:
sudo docker run \
-v /home/vagrant/scratch/share:/share \
-e AZURE_CLIENT_SECRET="some_secret" \
-e AZURE_CLIENT_ID="some_client_id" \
-e AZURE_TENANT_ID="some_tenant_id" \
-e AZURE_SUBSCRIPTION_ID="some_subscription_id" \
chef/inspec \
exec /share/inspector \
-t azure:// \
--chef-license=accept
Just in case anyone else comes across this headache: pass the options as a hash into the Runner object when you instantiate it (note the vendor cache was tidied up as well):
def my_resource_groups
  rg = Inspec::Runner.new({:target=>'azure://', :vendor_cache=>'/app'})
  rg.add_target('/app/profiles/azure')
  rg.run
end
my_resource_groups()

How do you add spaces for aws cloudformation deploy --parameter-overrides and/or --tags?

I am trying to get spaces into the tags parameter for the AWS CLI. It works if I hardcode the tags but not if I use bash variables. What is going on and how do I fix it?
This works without spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags Key1=Value1 Key2=Value2
This works without spaces but with variables:
tags="Key1=Value1 Key2=Value2"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
This works with spaces:
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags 'Key1=Value1' 'Key Two=Value2'
This does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags $tags
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
Attempting to fix bash expansion, also does not work, spaces and variables:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$(printf '%q' $tags)"
Error:
Invalid parameter: Tags Reason: The given tag(s) contain invalid
characters (Service: AmazonSNS; Status Code: 400; Error Code:
InvalidParameter; Request ID
Would you please try:
tags=('Key1=Value1' 'Key Two=Value2')
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "${tags[@]}"
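The array form works because each element reaches the CLI as its own argument, with the embedded space intact. A quick standalone way to see the difference:
tags=('Key1=Value1' 'Key Two=Value2')
printf '<%s>\n' "${tags[@]}"   # two arguments: <Key1=Value1> and <Key Two=Value2>
printf '<%s>\n' "${tags[*]}"   # one argument:  <Key1=Value1 Key Two=Value2>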
Stealing some ideas from https://github.com/aws/aws-cli/issues/3274, I was able to get this working by doing the following:
deploy=(aws cloudformation deploy
...
--tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
With a tags.json file structure of
[
  {
    "Key": "Name With Spaces",
    "Value": "Value With Spaces"
  },
  {
    "Key": "Foo",
    "Value": "Bar"
  }
]
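An eval-free variant of the same idea, sketched here under the assumption of the same tags.json layout: have jq emit one Key=Value line per tag, read the lines into a bash array, and expand that array with "${...[@]}".
# -r strips the JSON quotes; mapfile keeps each line (spaces and all) as one array element
mapfile -t tags < <(jq -r '.[] | .Key + "=" + .Value' tags.json)
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "${tags[@]}"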
Try this:
tags="'Key1=Value1' 'Key Two=Value2'"
aws cloudformation deploy \
--template-file /path_to_template/template.json \
--stack-name my-new-stack \
--tags "$tags"
#  ^ ^
#  double quotes
Learn how to quote properly in shell, it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
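A compact illustration of why the failing attempts above fail: quotes stored inside a string are just ordinary characters, and an unquoted expansion is split on every space.
tags="'Key1=Value1' 'Key Two=Value2'"
printf '<%s>\n' $tags     # three arguments: <'Key1=Value1'> <'Key> <Two=Value2'>
printf '<%s>\n' "$tags"   # one argument:    <'Key1=Value1' 'Key Two=Value2'>
Neither form yields the two arguments the CLI needs, which is why the array-based answers work and the string variants do not.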
As of 2022-02 this was still an issue, described here, here also, and a little here.
@esolomon is correct: you have to use array expansion. His answer works just fine:
deploy=(aws cloudformation deploy
...
--tags $(cat tags.json | jq '.[] | (.Key + "=" + .Value)'))
eval $(echo ${deploy[@]})
The actual problem comes from how the shell (bash here) hands values to the Python CLI executable. aws cloudformation deploy does not normalize the input itself; it expects the shell to deliver each array item as a separate argument, and that is what was causing my problem.
With the --debug flag turned on, the first output below shows the erroneous input and the second shows the expected input to aws cloudformation deploy.
Error input:
2022-02-10 17:32:28,137 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev PARAM2=blah', '--tags', "TAG1='Test Project' TAG2='blah'...", '--debug']
Expected input:
2022-02-10 17:39:40,390 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['cloudformation', 'deploy', '--region', 'us-east-1', ..., '--parameter-overrides', 'PARAM1=dev', 'PARAM2=blah', '--tags', "TAG2='Test Project'", 'TAG2=blah',..., '--debug']
I was unexpectedly sending in a single string instead of an array of strings, which produced different errors depending on how I sent it:
example TAG: TAG1=Test Project
['Project'] value passed to --tags must be of format Key=Value
This means IFS needs to be set to something other than the default ' \t\n'; solution below.
An error occurred (ValidationError) when calling the CreateChangeSet operation: 1 validation error detected: Value 'Test Project Tag2=Value2 ...' at 'tags.1.member.value' failed to satisfy constraint: Member must have length less than or equal to 256
The error starts after the first =. It means I am sending one long string instead of separate array items, as happens when using [*] instead of [@], i.e. aws cloudformation deploy ... --tags "${TAGS[*]}" (diff between [*] and [@]).
To fix this, the most important thing was that IFS needed to be set to anything other than ' \t\n'; secondly, I still needed array expansion with [@] and could not pass a plain string. --parameter-overrides did not have this problem for me, despite similar variable loading, because its values did not contain spaces.
This was my solution. My params and tags input is all over the place (spaces, sometimes arrays, bad indenting), hence the sed:
export IFS=$'\n'
# Build up the parameters and Tags
PARAMS=($(jq '.[] | .ParameterKey + "=" + if .ParameterValue|type=="array" then .ParameterValue | join(",") else .ParameterValue end ' parameters-${environment}.json \
| sed -e 's/"//g' \
| sed -e $'s/\r//g' | tr '\n' ' '))
TAGS=("$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-common.json)")
TAGS=($TAGS "$(jq -r '.[] | [.Key, .Value] | "\(.[0])=\(.[1])"' tags-${environment}.json)")
aws cloudformation deploy \
--region "${REGION}" \
--no-fail-on-empty-changeset \
--template-file "stack-name-cfn-transform.yaml" \
--stack-name "stack-name-${environment}" \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides "${PARAMS[@]}" \
--tags "${TAGS[@]}" \
--profile "${PROFILE}"
parameters file:
[
  {
    "ParameterKey": "Environment",
    "ParameterValue": "dev"
  }
]
tags file - both the common and environment-specific tag files have the same format:
[
  {
    "Key": "TAG1",
    "Value": "Test Project"
  },
  {
    "Key": "Name With Spaces",
    "Value": "Value With Spaces"
  },
  {
    "Key": "Foo",
    "Value": "Bar"
  }
]
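A quick sanity check before the deploy call (just a sketch): print the arrays and confirm each tag really is a separate element.
declare -p PARAMS TAGS          # shows how bash stored the arrays
printf '<%s>\n' "${TAGS[@]}"    # one line per tag, spaces preserved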
I resolved this scenario using the options below:
"scripts": { "invoke": "sam ... --parameter-overrides \"$(jq -j 'to_entries[] | \"\\(.key)='\\\\\\\"'\\(.value)'\\\\\\\"''\\ '\"' params.json)\"" }
Or
sam ... --parameter-overrides "$(jq -j 'to_entries[] | "\(.key)='\\\"'\(.value)'\\\"''\ '"' params.json)"

How to connect an OpenTracing application to a remote Jaeger collector

I am using the Jaeger UI to display traces from my application. It works fine when both the application and Jaeger run on the same server, but I need to run my Jaeger collector on a different server. I tried JAEGER_ENDPOINT, JAEGER_AGENT_HOST and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong, or whether any configuration is required inside the application code.
Can you point me to any documentation for this problem?
On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables:
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
Change the tracer registration code in the application on server 1 as below, so that it picks up the configuration from the environment variables.
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
        .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that traces are not lost once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?
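Putting the two servers together, a minimal launch sketch for server 1 (the hostname and jar name are placeholders): export the Jaeger client variables in the shell that starts the application, and pass the server id as a JVM system property, which the code above reads via System.getProperty before falling back to the environment.
export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=1
export JAEGER_SAMPLER_MANAGER_HOST_PORT=server2.example.com:5778
export JAEGER_REPORTER_LOG_SPANS=false
export JAEGER_AGENT_HOST=server2.example.com
export JAEGER_AGENT_PORT=6831
export JAEGER_REPORTER_FLUSH_INTERVAL=1000
export JAEGER_REPORTER_MAX_QUEUE_SIZE=100
java -Dapplication-server-id=server-x -jar my-app.jar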
Specify "JAEGER_AGENT_HOST" and ensure "local_agent" is not specified in the tracer config.
Below is a working solution for Python:
import os
os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"           # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 16686,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)
# create the tracer object here and voila!
Guidance of Jaeger: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger-Client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/

aws-cli API Gateway: how to encode newlines in integration response templates invoked from bash

I am using "aws apigateway update-integration-response" from a bash script to update an integration response template. My problem is that the newlines do not appear in the web console. I tried all combinations of slash-n, backslash-n, even unicode for the newline, without success. Below are the bash command and the output as it appears in the AWS web console:
bash:
echo "update integration response script mapping for ${CODE} ${2}"
aws apigateway update-integration-response \
--rest-api-id ${APIID} \
--resource-id ${RESOURCEID} \
--http-method ${METHOD} \
--status-code ${CODE} \
--patch-operations \
"op='add',path='/responseTemplates/application~1json',value='#set(\$errorMessageObj = \$util.parseJson(\$input.path(\'\$.errorMessage\')))NEWLINE/nA//nB///nX////nC\nD\\nE\\\nF\\\\n \u000A unicodeA Cg== unicodeB #if(\"\$errorMessageObj.get(\'error-code\')\" != \"\")\n{\n \"error-code\": \"\$errorMessageObj[\'error-code\']\",\n \"error-message\": \"\$errorMessageObj[\'error-message\']\"\n}\\n#else\n{\n \"error-code\": \"AWS\",\n \"error-message\": \"\$input.path(\'\$.errorMessage\')\"\n}\n#end'"
Output in the aws web console:
#set($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))NEWLINE/nA//nB///nX////nC\nD\nE\nF\n \u000A unicodeA Cg== unicodeB #if("$errorMessageObj.get('error-code')" != "")\n{\n "error-code": "$errorMessageObj['error-code']",\n "error-message": "$errorMessageObj['error-message']"\n}\n#else\n{\n "error-code": "AWS",\n "error-message": "$input.path('$.errorMessage')"\n}\n#end
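One approach that reliably produces real newlines (sketched below, assuming jq is available; the variable names are the ones from the script above) is to hand the CLI actual newline characters rather than backslash-n text: write the template in a quoted heredoc, let jq turn it into a JSON string, and pass --patch-operations as a JSON document, since the CLI's JSON parser decodes \n back into newlines.
# A quoted heredoc keeps $, quotes and newlines literal, so the VTL can be
# written exactly as it should appear in the console.
template=$(cat <<'EOF'
#set($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
#if("$errorMessageObj.get('error-code')" != "")
{
  "error-code": "$errorMessageObj['error-code']",
  "error-message": "$errorMessageObj['error-message']"
}
#else
{
  "error-code": "AWS",
  "error-message": "$input.path('$.errorMessage')"
}
#end
EOF
)

# jq -Rs wraps the raw text in a single JSON string (real newlines become \n).
patch=$(jq -Rs --arg path '/responseTemplates/application~1json' \
'[{op: "add", path: $path, value: .}]' <<<"$template")

aws apigateway update-integration-response \
--rest-api-id ${APIID} \
--resource-id ${RESOURCEID} \
--http-method ${METHOD} \
--status-code ${CODE} \
--patch-operations "$patch"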
