I need an output similar to this:
{
"InstanceType": "c4.xlarge",
"PrivateIpAddress": "10.54.130.52",
"PlatformDetails": "Windows BYOL",
"State": {
"Name": "running"
}
}
Reading the documentation of the jq command, I have gotten as far as the following output:
aws ec2 describe-instances --instance-ids i-0079e143722b0b8f9 | jq -r '.Reservations[].Instances[] | {InstanceType, PrivateIpAddress, PlatformDetails, State}'
{
"InstanceType": "c4.xlarge",
"PrivateIpAddress": "10.54.130.52",
"PlatformDetails": "Windows BYOL",
"State": {
"Code": 16,
"Name": "running"
}
}
Can anyone explain how to do that?
Regards,
This should work:
aws ec2 describe-instances --instance-ids i-0079e143722b0b8f9 | jq -r '.Reservations[].Instances[] | {InstanceType, PrivateIpAddress, PlatformDetails, State: {Name:.State.Name} }'
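An equivalent way to write that filter, without restating the nested key, is to update `State` in place with `|=`. A standalone check with sample data (the literal JSON below stands in for the `describe-instances` output):

```shell
sample='{"InstanceType":"c4.xlarge","PrivateIpAddress":"10.54.130.52","PlatformDetails":"Windows BYOL","State":{"Code":16,"Name":"running"}}'

# Inside |= the right-hand filter runs with . set to the current value
# of .State, so {Name} keeps just that one field.
echo "$sample" \
  | jq '{InstanceType, PrivateIpAddress, PlatformDetails, State} | .State |= {Name}'
```

This prints the same trimmed object as the answer above.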
Related
I am trying to get an SSM parameter and export it as an environment variable on an EC2 instance using the UserData section of CloudFormation.
The script is supposed to append, for example, export WHATS_HER_NAME=Sherlyn to the /etc/profile file. But all I see in /etc/profile is export WHATS_HER_NAME= with no value. I am using the Amazon Linux 2 AMI.
Here is my CloudFormation template.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"Ec2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"IamInstanceProfile": {
"Ref": "Ec2instanceProfileTest"
},
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"\n",
[
"#!/bin/bash -xe",
"yum update -y aws-cfn-bootstrap",
{
"Fn::Sub": "/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource Ec2Instance --configsets default --region ${AWS::Region}"
},
{
"Fn::Sub": "/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}"
},
{
"Fn::Sub": "echo \"export WHATS_HER_NAME=$(aws ssm get-parameter --name WhatsHerName --region ${AWS::Region} --query 'Parameter.Value')\" >> /etc/profile"
}
]
]
}
}
}
},
"GetSSMParameterPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "GetSsmProperty",
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:ssm:ap-southeast-2:012345678901:parameter/WhatsHerName",
"Action": [
"ssm:GetParameters",
"ssm:GetParameter"
]
},
{
"Effect": "Allow",
"Resource": "*",
"Action": [
"ssm:DescribeParameters"
]
}
]
},
"Roles": [
{
"Ref": "InstanceRole"
}
]
}
},
"InstanceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
},
"BasicParameter": {
"Type": "AWS::SSM::Parameter",
"Properties": {
"Name": "WhatsHerName",
"Type": "String",
"Value": "Sherlyn"
}
}
}
}
Any help would be highly appreciated.
I am not a fan of using JSON for CloudFormation templates, so I cannot offer the solution in JSON, but here it is in YAML.
UserData:
Fn::Base64: !Sub
- |
#!/bin/bash -xe
yum update -y aws-cfn-bootstrap
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource Ec2Instance --configsets default --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
echo export WHATS_HER_NAME=${WhatsHerNameParameter} >> /etc/profile
- WhatsHerNameParameter: '{{resolve:ssm:WhatsHerName:1}}'
You can read more about using AWS Systems Manager Parameter Store Secure String parameters in AWS CloudFormation templates
The snippet above substitutes ${AWS::StackName} and ${AWS::Region}, and when it gets to ${WhatsHerNameParameter} it resolves the SSM parameter and substitutes that into the UserData.
This means the UserData is complete before it reaches the EC2 instance.
I see two issues:
Your instance doesn't depend on the parameter, so the parameter can be created after the instance. When the instance tries to read the parameter, it simply isn't there yet. Use DependsOn: [ BasicParameter ] on the instance.
You didn't include Ec2instanceProfileTest in your sample code. Are you sure it actually uses GetSSMParameterPolicy? If you run that aws ssm get-parameter command after the stack is done, can you retrieve the value? If not, there may be a permission error; check the result.
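Applying the first suggestion to the JSON template in the question amounts to adding a DependsOn attribute at the resource level. A sketch of only the relevant part (the rest of the resource stays as in the question):

```json
"Ec2Instance": {
    "Type": "AWS::EC2::Instance",
    "DependsOn": [ "BasicParameter" ],
    "Properties": { }
}
```

With this in place, CloudFormation creates the WhatsHerName parameter before it launches the instance, so the UserData script can read it.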
What is working fine
I have successfully created an AWS ECS cluster using a Terraform script
with the following parameters:
max_instance_size = 2
min_instance_size = 1
desired_capacity = 1
"maximumPercent": 100,
"minimumHealthyPercent": 0
Everything is working fine when I create this cluster. My application is up and running and accessible through the Load Balancer.
What is giving problem
Now I have a Jenkins job that does the following steps:
Checkout
Build Application
Create Docker Image
Push Docker image into Hub
Deploy the image through Task Definition update.
Here is the Jenkins snippet
stage("Deploy") {
sh "sed -e 's;%BUILD_TAG%;${BUILD_NUMBER};g' accountupdateecs-task-defination.json > accountupdateecs-task-defination-${BUILD_NUMBER}.json"
def currTaskDef = sh (returnStdout: true,script: "aws ecs describe-task-definition --task-definition ${taskFamily}| egrep 'revision'| tr ',' ' '| awk '{print \$2}'").trim()
def currentTask = sh (returnStdout: true,script: "aws ecs list-tasks --cluster ${clusterName} --family ${taskFamily} --output text | egrep 'TASKARNS' | awk '{print \$2}' ").trim()
if(currTaskDef) {sh "aws ecs update-service --cluster ${clusterName} --service ${serviceName} --task-definition ${taskFamily}:${currTaskDef} --desired-count 0 "}
if (currentTask) {sh "aws ecs stop-task --cluster ${clusterName} --task ${currentTask}"}
sh "aws ecs register-task-definition --family ${taskFamily} --cli-input-json ${taskDefile}"
def taskRevision = sh (returnStdout: true, script: "aws ecs describe-task-definition --task-definition ${taskFamily} | egrep 'revision' | tr ',' ' ' | awk '{print \$2}'").trim()
sh "aws ecs update-service --force-new-deployment --cluster ${clusterName} --service ${serviceName} --task-definition ${taskFamily}:${taskRevision} --desired-count 1"
}
Issue
After successful execution of the job, I always see in the cluster:
desired-count = 1
running = 0
and the application is not available.
Here is the successful Jenkins log:
+ aws ecs update-service --force-new-deployment --cluster FinanceManagerCluster --service financemanager-ecs-service --task-definition accountupdateapp:20 --desired-count 1
{
"service": {
"serviceArn": "arn:aws:ecs:us-east-1:3432423423423:service/financemanager-ecs-service",
"serviceName": "financemanager-ecs-service",
"clusterArn": "arn:aws:ecs:us-east-1:3432423423423:cluster/FinanceManagerCluster",
"loadBalancers": [
{
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d",
"containerName": "accountupdateapp",
"containerPort": 8181
}
],
"status": "ACTIVE",
"desiredCount": 1,
"runningCount": 0,
"pendingCount": 0,
"launchType": "EC2",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:20",
"deploymentConfiguration": {
"maximumPercent": 100,
"minimumHealthyPercent": 0
},
"deployments": [
{
"id": "ecs-svc/9223370480222949120",
"status": "PRIMARY",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:20",
"desiredCount": 1,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1556631826.687,
"updatedAt": 1556631826.687,
"launchType": "EC2"
},
{
"id": "ecs-svc/9223370480223135598",
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:us-east-1:3432423423423:task-definition/accountupdateapp:19",
"desiredCount": 0,
"pendingCount": 0,
"runningCount": 0,
"createdAt": 1556631640.195,
"updatedAt": 1556631823.692,
"launchType": "EC2"
}
],
"roleArn": "arn:aws:iam::3432423423423:role/ecs-service-role",
"events": [
{
"id": "967c99cc-5de0-469f-8cdd-adadadad",
"createdAt": 1556631824.549,
"message": "(service financemanager-ecs-service) has begun draining connections on 1 tasks."
},
{
"id": "c4d99570-408a-4ab7-9790-adadadad",
"createdAt": 1556631824.543,
"message": "(service financemanager-ecs-service) deregistered 1 targets in (target-group arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d)"
},
{
"id": "bcafa237-598f-4c1d-97e9-adadadad",
"createdAt": 1556631679.467,
"message": "(service financemanager-ecs-service) has reached a steady state."
},
{
"id": "51437232-ed5f-4dbb-b09f-adadadad",
"createdAt": 1556631658.185,
"message": "(service financemanager-ecs-service) registered 1 targets in (target-group arn:aws:elasticloadbalancing:us-east-1:3432423423423:targetgroup/ecs-target-group/ed44ae00d0de463d)"
},
{
"id": "c42ee5c9-db5b-473a-b3ca-adadadad",
"createdAt": 1556631645.944,
"message": "(service financemanager-ecs-service) has started 1 tasks: (task fc04530a-479d-4385-9856-adadadad)."
}
],
"createdAt": 1556631640.195,
"placementConstraints": [],
"placementStrategy": [],
"healthCheckGracePeriodSeconds": 0
}
I need some help to understand and resolve this issue.
Thanks in advance.
I'm working on generating a new JSON payload to update Consul with an MSSQL database service location.
When I call jq like this:
mssql_svc_ip=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.clusterIP}')
mssql_svc_port=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.ports[0].port}')
jq -n -r --arg MSSQL_IP $mssql_svc_ip --arg MSSQL_PORT $mssql_svc_port '{
"Datacenter": "dev",
"Node": "database",
"Address": $MSSQL_IP,
"Service": {
"Service": "mssql-dev",
"Port": $MSSQL_PORT
}
}'
It produces the proper structure:
{
"Datacenter": "dev",
"Node": "database",
"Address": "10.43.192.146",
"Service": {
"Service": "mssql-dev",
"Port": "1433"
}
}
I need to convert the Service.Port field from a string to an integer, as that's what the Consul API requires. I should be able to do that with tonumber, like this:
mssql_svc_ip=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.clusterIP}')
mssql_svc_port=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.ports[0].port}')
jq -n -r --arg MSSQL_IP $mssql_svc_ip --arg MSSQL_PORT $mssql_svc_port '{
"Datacenter": "dev",
"Node": "database",
"Address": $MSSQL_IP,
"Service": {
"Service": "mssql-dev",
"Port": tonumber($MSSQL_PORT)
}
}'
However, when I try to convert the $MSSQL_PORT variable to a number, I get this error:
jq: error: tonumber/1 is not defined at <top-level>, line 7:
"Port": tonumber($MSSQL_PORT)
jq: 1 compile error
At first I thought it was an assignment error and the variables weren't being passed as arguments properly, but I've tried a couple of iterations and I still get the same error. What am I doing incorrectly?
I think you are misusing the tonumber filter. Based on the documentation, the syntax would be something like:
jq -n -r --arg MSSQL_IP "$mssql_svc_ip" --arg MSSQL_PORT "$mssql_svc_port" '{
"Datacenter": "dev",
"Node": "database",
"Address": $MSSQL_IP,
"Service": {
"Service": "mssql-dev",
"Port": ($MSSQL_PORT|tonumber)
}
}'
And indeed, if $mssql_svc_ip is 10.43.192.146 and $mssql_svc_port is
1433, that gets me:
{
"Datacenter": "dev",
"Node": "database",
"Address": "10.43.192.146",
"Service": {
"Service": "mssql-dev",
"Port": 1433
}
}
Looks like you need to pass the number in with --argjson instead of --arg:
$ jq -n -r --argjson foo 12 '{"foo":$foo}'
{
"foo": 12
}
This seems simpler than using tonumber.
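Both variants can be checked standalone, with a literal value in place of the kubectl lookup:

```shell
# --arg always binds the value as a string, so it has to be converted
# inside the program:
jq -n -c --arg port 1433 '{Port: ($port|tonumber)}'

# --argjson parses the value as JSON, so it arrives as a number already:
jq -n -c --argjson port 1433 '{Port: $port}'
```

Both commands print {"Port":1433}.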
I am working on downloading a Docker image on an internet-connected Windows machine that does not have (and cannot have) Docker installed, to transfer to a non-internet-connected Linux machine that does have Docker. I'm using git-bash to run download-frozen-image-v2.sh. Everything works as expected until the script begins to download the final layer of any given image. On the final layer the json file comes back empty. Through echo statements, I'm able to see that everything works flawlessly until lines 119-142:
jq "$addJson + ." > "$dir/$layerId/json" <<-'EOJSON'
{
"created": "0001-01-01T00:00:00Z",
"container_config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
}
}
EOJSON
Only on the final layer, this code results in an empty json file, which in turn causes an error at line 173:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" "$dir/$configFile" > "$dir/$imageId/json"
jq: error: syntax error, unexpected '+', expecting $end (Windows cmd shell quoting issues?) at <top-level>, line 1:
+ del(.history, .rootfs)
jq: 1 compile error
Update
Exact steps to replicate
Perform on Windows 10 computer.
1) Install scoop for Windows https://scoop.sh/
2) in Powershell scoop install git curl jq go tar
3) git-bash
4) in git-bash curl -o download-frozen-image-v2.sh https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh
5) bash download-frozen-image-v2.sh ubuntu ubuntu:latest
The above will result in the aforementioned error.
In response to @peak below:
The command I'm using is bash download-frozen-image-v2.sh ubuntu ubuntu:latest, which should download 5 layers. The first 4 download flawlessly; it is only the last layer that fails. I tried this process with several other images, and it always fails on the final layer.
addJson:
{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9", parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }
dir/configFile:
ubuntu/113a43faa1382a7404681f1b9af2f0d70b182c569aab71db497e33fa59ed87e6.json
dir/configFile contents:
{
"architecture": "amd64",
"config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/bash"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"container": "6713e927cc43b61a4ce3950a69907336ff55047bae9393256e32613a54321c70",
"container_config": {
"Hostname": "6713e927cc43",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/bash\"]"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"created": "2018-06-05T21:20:54.310450149Z",
"docker_version": "17.06.2-ce",
"history": [
{
"created": "2018-06-05T21:20:51.286433694Z",
"created_by": "/bin/sh -c #(nop) ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in / "
},
{
"created": "2018-06-05T21:20:52.045074543Z",
"created_by": "/bin/sh -c set -xe \t\t&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d \t&& echo 'exit 101' >> /usr/sbin/policy-rc.d \t&& chmod +x /usr/sbin/policy-rc.d \t\t&& dpkg-divert --local --rename --add /sbin/initctl \t&& cp -a /usr/sbin/policy-rc.d /sbin/initctl \t&& sed -i 's/^exit.*/exit 0/' /sbin/initctl \t\t&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup \t\t&& echo 'DPkg::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' > /etc/apt/apt.conf.d/docker-clean \t&& echo 'APT::Update::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' >> /etc/apt/apt.conf.d/docker-clean \t&& echo 'Dir::Cache::pkgcache \"\"; Dir::Cache::srcpkgcache \"\";' >> /etc/apt/apt.conf.d/docker-clean \t\t&& echo 'Acquire::Languages \"none\";' > /etc/apt/apt.conf.d/docker-no-languages \t\t&& echo 'Acquire::GzipIndexes \"true\"; Acquire::CompressionTypes::Order:: \"gz\";' > /etc/apt/apt.conf.d/docker-gzip-indexes \t\t&& echo 'Apt::AutoRemove::SuggestsImportant \"false\";' > /etc/apt/apt.conf.d/docker-autoremove-suggests"
},
{
"created": "2018-06-05T21:20:52.712120056Z",
"created_by": "/bin/sh -c rm -rf /var/lib/apt/lists/*"
},
{
"created": "2018-06-05T21:20:53.405342638Z",
"created_by": "/bin/sh -c sed -i 's/^#\\s*\\(deb.*universe\\)$/\\1/g' /etc/apt/sources.list"
},
{
"created": "2018-06-05T21:20:54.091704323Z",
"created_by": "/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container"
},
{
"created": "2018-06-05T21:20:54.310450149Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:db9476e6d963ed2b6042abef1c354223148cdcdbd6c7416c71a019ebcaea0edb",
"sha256:3a89e0d8654e098e949764b1cb23018e27f299b0931c5fd41c207d610ff356c4",
"sha256:904d60939c360b5f528b886c1b534855a008f9a7fd411d4977e09aa7de74c834",
"sha256:a20a262b87bd8a00717f3b30c001bcdaf0fd85d049e6d10500597caa29c013c5",
"sha256:b6f13d447e00fba3b9bd10c1e5c6697e913462f44aa24af349bfaea2054e32f4"
]
}
}
Any help in figuring out what is occurring here would be greatly appreciated.
Thank you.
I can't tell you why this happens, but it appears to be a problem with how jq parses the input file: it segfaults when reading it. It's a known issue in the Windows builds, triggered by the length of the paths to the files.
Fortunately, there is a way around this issue by modifying the script to go against all conventional wisdom and cat the file to jq.
The script isn't utilizing jq very well and builds some of the JSON manually, so some additional fixes were needed. It produced INVALID_CHARACTER errors when parsing, probably a manifestation of this issue, since the script manually builds a lot of the jq programs.
I put up a gist with the updated file that at least doesn't error out; check whether it works as expected.
Changes start at lines 172 and 342.
The way it builds the manifest is just messy. I've cleaned it up a bit, removing all the string interpolations and instead passing all parameters in as arguments to jq.
# munge the top layer image manifest to have the appropriate image configuration for older daemons
local imageOldConfig="$(cat "$dir/$imageId/json" | jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end')"
cat "$dir/$configFile" | jq --raw-output "$imageOldConfig + del(.history, .rootfs)" > "$dir/$imageId/json"
local manifestJsonEntry="$(
jq --raw-output --compact-output -n \
--arg configFile "$configFile" \
--arg repoTags "${image#library\/}:$tag" \
--argjson layers "$(IFS=$'\n'; jq --arg a "${layerFiles[*]}" -n '$a | split("\n")')" \
'{
Config: $configFile,
RepoTags: [ $repoTags ],
Layers: $layers
}'
)"
(1) I have verified that using bash, the sequence:
addJson='{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9",
parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
jq "$addJson + ." configFile > layerId.json
succeeds, where configFile has the contents shown in the updated question.
(2) Similarly, I have verified that the following also succeeds:
imageOldConfig="$(jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end' layerId.json)"
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" <<-'EOJSON'
<JSON as in the question>
EOJSON
where <JSON as in the question> stands for the JSON shown in the question.
(3) In general, it is not a good idea to pass shell $-variables into jq programs by shell string interpolation.
For example, rather than writing:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)"
it would be much better to write something like:
jq --raw-output --argjson imageOldConfig "$imageOldConfig" '
$imageOldConfig + del(.history, .rootfs)'
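Point (3) can be illustrated self-contained, using hypothetical stand-ins for the values the script derives:

```shell
# Hypothetical stand-ins for $imageOldConfig and the config file contents:
imageOldConfig='{"id":"aaa","parent":"bbb"}'
config='{"os":"linux","history":[],"rootfs":{"type":"layers"}}'

# String interpolation: the jq program text itself depends on the data,
# so unexpected characters in the data can break compilation.
echo "$config" | jq -c "$imageOldConfig + del(.history, .rootfs)"

# Passing the value with --argjson: the program text is fixed, and the
# data is delivered to it as a proper JSON value.
echo "$config" | jq -c --argjson imageOldConfig "$imageOldConfig" \
    '$imageOldConfig + del(.history, .rootfs)'
```

Both commands print {"id":"aaa","parent":"bbb","os":"linux"}, but only the second stays well-formed no matter what the data contains.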
I'm trying to take every Key,Value pair of an output and pipe it to another command.
Here is what I'm trying to use:
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID"
With the above command, I have the following output:
{
"Tags": [
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "A-VALUE",
"Key": "A-KEY"
},
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "B-VALUE",
"Key": "B-KEY"
},
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "C-VALUE",
"Key": "C-KEY"
},
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "D-VALUE",
"Key": "D-KEY"
},
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "E-VALUE",
"Key": "E-KEY"
},
{
"ResourceType": "instance",
"ResourceId": "i-0342a609edf80001a",
"Value": "F-VALUE",
"Key": "G-KEY"
},
{
Now I want to pipe each Key,Value to the following command:
aws ec2 create-tags --resources XXXXX --tags Key=H-KEY,Value=H-VALUE
The quantity and values of the Key,Value pairs are variable, so I believe I need a "for each".
Can you help me?
It's like: for each Key,Value pair, do:
aws ec2 create-tags --resources XXXXX --tags Key=A-KEY,Value=A-VALUE
aws ec2 create-tags --resources XXXXX --tags Key=B-KEY,Value=B-VALUE
aws ec2 create-tags --resources XXXXX --tags Key=C-KEY,Value=C-VALUE
aws ec2 create-tags --resources XXXXX --tags Key=N...-KEY,Value=N...-VALUE
jq has a @sh directive to output values properly quoted for the shell:
aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
| jq -r '.Tags[] | @sh "aws ec2 create-tags --resources XXXXX --tags Key=\(.Key),Value=\(.Value)"'
Given your input, this outputs
aws ec2 create-tags --resources XXXXX --tags Key='A-KEY',Value='A-VALUE'
aws ec2 create-tags --resources XXXXX --tags Key='B-KEY',Value='B-VALUE'
aws ec2 create-tags --resources XXXXX --tags Key='C-KEY',Value='C-VALUE'
aws ec2 create-tags --resources XXXXX --tags Key='D-KEY',Value='D-VALUE'
aws ec2 create-tags --resources XXXXX --tags Key='E-KEY',Value='E-VALUE'
aws ec2 create-tags --resources XXXXX --tags Key='G-KEY',Value='F-VALUE'
To execute those as commands, pipe them into sh:
aws ec2 describe-tags ... | jq -r ... | sh
jq is quite an adventure. You need to add a select filter to remove keys that start with "aws:", since that prefix is reserved by AWS and such tags cannot be created:
jq -r '
.Tags[] |
select(.Key | test("^aws:") | not) |
@sh "aws ... --tags Key=\(.Key),Value=\(.Value)"
'
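The filtered pipeline can be tried offline with a small sample document standing in for the describe-tags output:

```shell
# Sample input with one normal tag and one reserved aws:-prefixed tag:
sample='{"Tags":[
  {"Key":"A-KEY","Value":"A-VALUE"},
  {"Key":"aws:cloudformation:stack-name","Value":"mystack"}
]}'

# select drops the reserved key; @sh single-quotes each interpolated value.
echo "$sample" | jq -r '
  .Tags[]
  | select(.Key | test("^aws:") | not)
  | @sh "aws ec2 create-tags --resources XXXXX --tags Key=\(.Key),Value=\(.Value)"'
```

This prints only the line for the non-reserved tag, with the values safely quoted for the shell.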