jq cannot convert string to int in bash

I'm working on generating a new JSON payload to update Consul with an MSSQL database service location.
When I call jq like this:
mssql_svc_ip=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.clusterIP}')
mssql_svc_port=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.ports[0].port}')
jq -n -r --arg MSSQL_IP $mssql_svc_ip --arg MSSQL_PORT $mssql_svc_port '{
  "Datacenter": "dev",
  "Node": "database",
  "Address": $MSSQL_IP,
  "Service": {
    "Service": "mssql-dev",
    "Port": $MSSQL_PORT
  }
}'
It produces the proper structure:
{
  "Datacenter": "dev",
  "Node": "database",
  "Address": "10.43.192.146",
  "Service": {
    "Service": "mssql-dev",
    "Port": "1433"
  }
}
I need to convert the Service.Port field from a string to an integer as that's what the Consul API requires. I can do that with tonumber, like this:
mssql_svc_ip=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.clusterIP}')
mssql_svc_port=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.ports[0].port}')
jq -n -r --arg MSSQL_IP $mssql_svc_ip --arg MSSQL_PORT $mssql_svc_port '{
  "Datacenter": "dev",
  "Node": "database",
  "Address": $MSSQL_IP,
  "Service": {
    "Service": "mssql-dev",
    "Port": tonumber($MSSQL_PORT)
  }
}'
However, when I try and convert the $MSSQL_PORT variable to a number, I get this error:
jq: error: tonumber/1 is not defined at <top-level>, line 7:
"Port": tonumber($MSSQL_PORT)
jq: 1 compile error
At first I thought it was an assignment error and the variables weren't being passed as arguments properly, but I've tried a couple of iterations and I still get the same error. What am I doing incorrectly?

I think you are misusing the tonumber filter. Based on the documentation, it looks like the syntax would be something like:
jq -n -r --arg MSSQL_IP "$mssql_svc_ip" --arg MSSQL_PORT "$mssql_svc_port" '{
"Datacenter": "dev",
"Node": "database",
"Address": $MSSQL_IP,
"Service": {
"Service": "mssql-dev",
"Port": ($MSSQL_PORT|tonumber)
}
}'
And indeed, if $mssql_svc_ip is 10.43.192.146 and $mssql_svc_port is 1433, that gets me:
{
  "Datacenter": "dev",
  "Node": "database",
  "Address": "10.43.192.146",
  "Service": {
    "Service": "mssql-dev",
    "Port": 1433
  }
}

Looks like you need to pass the number in with --argjson instead of --arg:
$ jq -n -r --argjson foo 12 '{"foo":$foo}'
{
"foo": 12
}
This seems simpler than using tonumber.
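Applied to the command from the question, that would look something like this (a sketch, assuming the same kubectl variables as above):
mssql_svc_ip=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.clusterIP}')
mssql_svc_port=$(kubectl get svc/mssql-linux -o 'jsonpath={.spec.ports[0].port}')
# --arg always passes a string; --argjson parses its value as JSON,
# so a bare port number comes through as a JSON number.
jq -n -r --arg MSSQL_IP "$mssql_svc_ip" --argjson MSSQL_PORT "$mssql_svc_port" '{
  "Datacenter": "dev",
  "Node": "database",
  "Address": $MSSQL_IP,
  "Service": {
    "Service": "mssql-dev",
    "Port": $MSSQL_PORT
  }
}'
One caveat: --argjson fails with a parse error if $mssql_svc_port is empty or not valid JSON, so this relies on kubectl actually returning a port number.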

Related

az webapp list pull all hostnames for all active webapps

I'm attempting to pull down all the enabledHostNames associated with all of my webapps.
For example, if I had a basic webapp with the following configuration, I would want to print out test1.com and test2.com.
{
  "id": "foobar",
  "name": "foobar",
  "type": "Microsoft.Web/sites",
  "kind": "app",
  "location": "East US",
  "properties": {
    "name": "foobar",
    "state": "Running",
    "hostNames": [
      "test1.com",
      "test2.com"
    ],
    "webSpace": "kwiecom-EastUSwebspace",
    "selfLink": "foobar",
    "repositorySiteName": "foobar",
    "owner": null,
    "usageState": 0,
    "enabled": true,
    "adminEnabled": true,
    "enabledHostNames": [
      "test1.com",
      "test2.com"
    ]
  }
}
When I run the following, I just get the number of hostnames associated with each webapp.
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]} --output tsv
The output looks like the following
2
Appreciate any help
Removing --output tsv will result in the hostnames being displayed instead of the total count, e.g.:
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]}
The output from this command is:
[
  {
    "Name": [
      "test1.com",
      "test2.com"
    ]
  }
]
Not sure if this is the exact output you are looking for. Apologies if you have already considered this.
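If flat TSV output is the goal, a flattening projection may get closer (an untested sketch, reusing the resource group name from the question):
az webapp list --resource-group resourcegroup1 --query "[?state=='Running'].enabledHostNames[]" --output tsv
Here [?state=='Running'] filters the running webapps, .enabledHostNames projects each app's hostname list, and the trailing [] flattens the nested lists, so each hostname should end up on its own row of the TSV output.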

passing more information to consul watch handler

I am wondering whether a consul watch handler can be passed some dynamic information when it is called.
That is, can the watch mechanism pass the script more arguments, beyond the static arguments I configure, as in the example below?
{
  "watches": [
    {
      "type": "service",
      "args": ["/tmp/dosomething.sh", "how can i get responses from /v1/health/service here"]
    }
  ]
}
By the way, when I want to watch a service, the most important info to me is the service's state (passing or critical), but I don't understand:
when the watch type is 'service', why can I not specify the service?
when the watch type is 'checks', why can I not specify state and service concurrently?
consul watch passes the entire API response payload to the watch handler script on stdin. Your script needs to consume and parse that JSON, and then act on the data provided.
When you watch a service, the data returned is from the /v1/health/service/:service endpoint. (See consul/api/watch/funcs.go.)
when the watch type is 'service', why can I not specify the service?
I assume you mean that you would like to watch a specific service. If so, this is supported. You can specify a specific service to watch using the -service flag. For example, consul watch -type=service -service=assets.
when the watch type is 'checks', why can I not specify state and service concurrently?
If you're interested in monitoring checks for a particular service, you should just use the aforementioned watch command for a specific service. The service check information is included in the API response.
$ consul watch -type=service -service=assets
[
  {
    "Node": {
      "ID": "f013522f-aaa2-8fc6-c8ac-c84cb8a56405",
      "Node": "hashicorp-consul-server-2",
      "Address": "10.0.0.82",
      "Datacenter": "dc2",
      "TaggedAddresses": null,
      "Meta": null,
      "CreateIndex": 22898191,
      "ModifyIndex": 22898191
    },
    "Service": {
      "ID": "assets-v1",
      "Service": "assets",
      "Tags": [],
      "Meta": null,
      "Port": 9090,
      "Address": "",
      "Weights": {
        "Passing": 1,
        "Warning": 1
      },
      "EnableTagOverride": false,
      "CreateIndex": 22898195,
      "ModifyIndex": 22898195,
      "Proxy": {
        "MeshGateway": {},
        "Expose": {}
      },
      "Connect": {}
    },
    "Checks": [
      {
        "Node": "hashicorp-consul-server-2",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "ServiceTags": [],
        "Type": "",
        "Definition": {
          "Interval": "0s",
          "Timeout": "0s",
          "DeregisterCriticalServiceAfter": "0s",
          "HTTP": "",
          "Header": null,
          "Method": "",
          "Body": "",
          "TLSServerName": "",
          "TLSSkipVerify": false,
          "TCP": ""
        },
        "CreateIndex": 22898191,
        "ModifyIndex": 22898191
      }
    ]
  }
]
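As a starting point for the handler itself, a script might read the payload from stdin and extract each check's status with jq (a minimal sketch; the /tmp/dosomething.sh path matches the question's config, but the jq filter is illustrative):
#!/usr/bin/env bash
# /tmp/dosomething.sh - invoked by consul watch; the health API response
# for the watched service arrives as JSON on stdin.
payload=$(cat)

# Print "node service check-status" for every check in the payload.
echo "$payload" | jq -r '.[] | .Node.Node as $node | .Service.Service as $svc
  | .Checks[] | "\($node) \($svc) \(.Status)"'
Anything else the script needs (thresholds, who to notify, and so on) has to come from the static args in the watch definition or from the environment; the watch mechanism itself only delivers the API response.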

Setting SSM parameter as an Environment variable for EC2 - Does not work

I am trying to get and export an SSM parameter as an environment variable on an EC2 instance using the UserData section of CloudFormation.
The script is meant to append, for example, export WHATS_HER_NAME=Sherlyn to the /etc/profile file, but all I see in /etc/profile is export WHATS_HER_NAME=. The value is not present. I am using the Amazon Linux 2 AMI.
Here is my CloudFormation template.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "Ec2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "IamInstanceProfile": {
          "Ref": "Ec2instanceProfileTest"
        },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [
              "\n",
              [
                "#!/bin/bash -xe",
                "yum update -y aws-cfn-bootstrap",
                {
                  "Fn::Sub": "/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource Ec2Instance --configsets default --region ${AWS::Region}"
                },
                {
                  "Fn::Sub": "/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}"
                },
                {
                  "Fn::Sub": "echo \"export WHATS_HER_NAME=$(aws ssm get-parameter --name WhatsHerName --region ${AWS::Region} --query 'Parameter.Value')\" >> /etc/profile"
                }
              ]
            ]
          }
        }
      }
    },
    "GetSSMParameterPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "GetSsmProperty",
        "PolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Resource": "arn:aws:ssm:ap-southeast-2:012345678901:parameter/WhatsHerName",
              "Action": [
                "ssm:GetParameters",
                "ssm:GetParameter"
              ]
            },
            {
              "Effect": "Allow",
              "Resource": "*",
              "Action": [
                "ssm:DescribeParameters"
              ]
            }
          ]
        },
        "Roles": [
          {
            "Ref": "InstanceRole"
          }
        ]
      }
    },
    "InstanceRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "ec2.amazonaws.com"
                ]
              },
              "Action": [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path": "/"
      }
    },
    "BasicParameter": {
      "Type": "AWS::SSM::Parameter",
      "Properties": {
        "Name": "WhatsHerName",
        "Type": "String",
        "Value": "Sherlyn"
      }
    }
  }
}
Any help would be highly appreciated.
I am not a fan of using JSON for CloudFormation templates, so I cannot offer the solution in JSON, but here it is in YAML.
UserData:
  Fn::Base64: !Sub
    - |
      #!/bin/bash -xe
      yum update -y aws-cfn-bootstrap
      /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource Ec2Instance --configsets default --region ${AWS::Region}
      /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
      echo export WHATS_HER_NAME=${WhatsHerNameParameter} >> /etc/profile
    - WhatsHerNameParameter: '{{resolve:ssm:WhatsHerName:1}}'
You can read more about using AWS Systems Manager Parameter Store Secure String parameters in AWS CloudFormation templates.
The snippet above substitutes ${AWS::StackName} and ${AWS::Region}, and when it gets to ${WhatsHerNameParameter} it looks up the SSM parameter and substitutes its value into the UserData.
This means that the UserData is complete before it ever reaches the EC2 instance.
I see two issues:
Your instance doesn't depend on the parameter, so the parameter can be created after the instance. When the instance tries to read the parameter, it may simply not exist yet. Use DependsOn: [ BasicParameter ] (see the snippet after this list).
You didn't include Ec2instanceProfileTest in your sample code. Are you sure it properly uses GetSSMParameterPolicy? If you run that aws ssm get-parameter command after the stack is done, can you get the value properly? If not, there might be a permission error. Check the result.
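In the question's JSON template, that first fix would look something like this (a sketch showing only the relevant part of the resource):
"Ec2Instance": {
  "Type": "AWS::EC2::Instance",
  "DependsOn": ["BasicParameter"],
  "Properties": {
    ...
  }
}
With the dependency declared, CloudFormation creates the SSM parameter before launching the instance, so the aws ssm get-parameter call in UserData has something to read.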

Docker Image Download with download-frozen-image-v2.sh on Windows

I am working on downloading a Docker image on an internet-connected Windows machine that does not have (and cannot have) Docker installed, to transfer to a non-internet-connected Linux machine that does have Docker. I'm using git-bash to run download-frozen-image-v2.sh. Everything works as expected until the script begins to download the final layer of any given image; on the final layer the json file comes back empty. Through echo statements, I can see that everything works flawlessly until lines 119-142:
jq "$addJson + ." > "$dir/$layerId/json" <<-'EOJSON'
{
"created": "0001-01-01T00:00:00Z",
"container_config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
}
}
EOJSON
Only on the final layer, this code results in an empty json file, which in turn creates an error at line 173:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" "$dir/$configFile" > "$dir/$imageId/json"
jq: error: syntax error, unexpected '+', expecting $end (Windows cmd shell quoting issues?) at <top-level>, line 1:
+ del(.history, .rootfs)
jq: 1 compile error
Update
Exact steps to replicate
Perform the following on a Windows 10 computer.
1) Install scoop for Windows https://scoop.sh/
2) In PowerShell: scoop install git curl jq go tar
3) Open git-bash
4) In git-bash: curl -o download-frozen-image-v2.sh https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh
5) bash download-frozen-image-v2.sh ubuntu ubuntu:latest
The above will result in the aforementioned error.
In response to @peak below:
The command I'm using is bash download-frozen-image-v2.sh ubuntu ubuntu:latest, which should download 5 layers. The first 4 download flawlessly; it is only the last layer that fails. I tried this process for several other images, and it always fails on the final layer.
addJson:
{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9", parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }
dir/configFile:
ubuntu/113a43faa1382a7404681f1b9af2f0d70b182c569aab71db497e33fa59ed87e6.json
dir/configFile contents:
{
  "architecture": "amd64",
  "config": {
    "Hostname": "",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/bash"
    ],
    "ArgsEscaped": true,
    "Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": null
  },
  "container": "6713e927cc43b61a4ce3950a69907336ff55047bae9393256e32613a54321c70",
  "container_config": {
    "Hostname": "6713e927cc43",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sh",
      "-c",
      "#(nop) ",
      "CMD [\"/bin/bash\"]"
    ],
    "ArgsEscaped": true,
    "Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": {}
  },
  "created": "2018-06-05T21:20:54.310450149Z",
  "docker_version": "17.06.2-ce",
  "history": [
    {
      "created": "2018-06-05T21:20:51.286433694Z",
      "created_by": "/bin/sh -c #(nop) ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in / "
    },
    {
      "created": "2018-06-05T21:20:52.045074543Z",
      "created_by": "/bin/sh -c set -xe \t\t&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d \t&& echo 'exit 101' >> /usr/sbin/policy-rc.d \t&& chmod +x /usr/sbin/policy-rc.d \t\t&& dpkg-divert --local --rename --add /sbin/initctl \t&& cp -a /usr/sbin/policy-rc.d /sbin/initctl \t&& sed -i 's/^exit.*/exit 0/' /sbin/initctl \t\t&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup \t\t&& echo 'DPkg::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' > /etc/apt/apt.conf.d/docker-clean \t&& echo 'APT::Update::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' >> /etc/apt/apt.conf.d/docker-clean \t&& echo 'Dir::Cache::pkgcache \"\"; Dir::Cache::srcpkgcache \"\";' >> /etc/apt/apt.conf.d/docker-clean \t\t&& echo 'Acquire::Languages \"none\";' > /etc/apt/apt.conf.d/docker-no-languages \t\t&& echo 'Acquire::GzipIndexes \"true\"; Acquire::CompressionTypes::Order:: \"gz\";' > /etc/apt/apt.conf.d/docker-gzip-indexes \t\t&& echo 'Apt::AutoRemove::SuggestsImportant \"false\";' > /etc/apt/apt.conf.d/docker-autoremove-suggests"
    },
    {
      "created": "2018-06-05T21:20:52.712120056Z",
      "created_by": "/bin/sh -c rm -rf /var/lib/apt/lists/*"
    },
    {
      "created": "2018-06-05T21:20:53.405342638Z",
      "created_by": "/bin/sh -c sed -i 's/^#\\s*\\(deb.*universe\\)$/\\1/g' /etc/apt/sources.list"
    },
    {
      "created": "2018-06-05T21:20:54.091704323Z",
      "created_by": "/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container"
    },
    {
      "created": "2018-06-05T21:20:54.310450149Z",
      "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
      "empty_layer": true
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:db9476e6d963ed2b6042abef1c354223148cdcdbd6c7416c71a019ebcaea0edb",
      "sha256:3a89e0d8654e098e949764b1cb23018e27f299b0931c5fd41c207d610ff356c4",
      "sha256:904d60939c360b5f528b886c1b534855a008f9a7fd411d4977e09aa7de74c834",
      "sha256:a20a262b87bd8a00717f3b30c001bcdaf0fd85d049e6d10500597caa29c013c5",
      "sha256:b6f13d447e00fba3b9bd10c1e5c6697e913462f44aa24af349bfaea2054e32f4"
    ]
  }
}
Any help in figuring out what is occurring here would be greatly appreciated.
Thank you.
I can't tell you why this happens, but it appears to be a problem with how jq parses the input file: it's segfaulting when reading the file. It's a known issue in the Windows builds, where the problem is triggered by the length of the paths to the files.
Fortunately, there is a way around this issue by modifying the script to go against all conventional wisdom and cat the file to jq.
The script isn't utilizing jq very well and builds some of the JSON manually, so some additional fixes were needed; otherwise it errors with INVALID_CHARACTER when parsing. That is probably another manifestation of the same issue, since the script builds a lot of its jq programs manually.
I put up a gist with the updated file that at least doesn't error out, check to see if it works as expected.
Changes start at lines 172 and 342.
The way it builds the manifest is just messy, so I've cleaned it up a bit, removing all the string interpolations and instead passing all parameters in as arguments to jq.
# munge the top layer image manifest to have the appropriate image configuration for older daemons
local imageOldConfig="$(cat "$dir/$imageId/json" | jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end')"
cat "$dir/$configFile" | jq --raw-output "$imageOldConfig + del(.history, .rootfs)" > "$dir/$imageId/json"
local manifestJsonEntry="$(
  jq --raw-output --compact-output -n \
    --arg configFile "$configFile" \
    --arg repoTags "${image#library\/}:$tag" \
    --argjson layers "$(IFS=$'\n'; jq --arg a "${layerFiles[*]}" -n '$a | split("\n")')" \
    '{
      Config: $configFile,
      RepoTags: [ $repoTags ],
      Layers: $layers
    }'
)"
(1) I have verified that using bash, the sequence:
addJson='{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9",
parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
jq "$addJson + ." configFile > layerId.json
succeeds, where configFile has the contents shown in the updated question.
(2) Similarly, I have verified that the following also succeeds:
imageOldConfig="$(jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end' layerId.json)"
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" <<-'EOJSON'
<JSON as in the question>
EOJSON
where <JSON as in the question> stands for the JSON shown in the question.
(3) In general, it is not a good idea to pass shell $-variables into jq programs by shell string interpolation.
For example, rather than writing:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)"
it would be much better to write something like:
jq --raw-output --argjson imageOldConfig "$imageOldConfig" '
$imageOldConfig + del(.history, .rootfs)'
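For example, the addJson object shown earlier in the question could be passed the same way, provided its keys are quoted so that the value is strict JSON, which --argjson requires (a sketch, not the script's actual code; file names follow the examples above):
addJson='{ "id": "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9",
"parent": "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
# --argjson hands jq a pre-parsed JSON value instead of splicing raw text
# into the program, sidestepping quoting and injection problems.
jq --argjson addJson "$addJson" '$addJson + .' configFile > layerId.json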

Get a list of all MAC addresses using ansible

I know that the setup module provides MAC addresses per interface, for example:
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "192.168.35.174",
"broadcast": "192.168.35.255",
"netmask": "255.255.255.0",
"network": "192.168.35.0"
},
"ipv6": [
{
"address": "fe80::250:56ff:fe91:a6c2",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "00:50:56:91:a6:c2",
"module": "vmxnet3",
"mtu": 1500,
"pciid": "0000:0b:00.0",
"promisc": false,
"speed": 10000,
"type": "ether"
Suppose the server has 10 interfaces and I want to gather all their MACs, separated by semicolons. How would I do that if I don't know how many interfaces the server has or what they are named?
Take a look at this answer for a complete description.
You may try this:
ansible_interfaces |
map('regex_replace','^','ansible_') |
map('extract',hostvars[inventory_hostname]) |
selectattr('macaddress','defined') |
map(attribute='macaddress') |
list
This expression is not tested, but the idea should be clear.
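To get the semicolon-separated string the question asks for, the final list can be joined, e.g. in a debug task (equally untested; the same caveat applies):
- debug:
    msg: "{{ ansible_interfaces
             | map('regex_replace', '^', 'ansible_')
             | map('extract', hostvars[inventory_hostname])
             | selectattr('macaddress', 'defined')
             | map(attribute='macaddress')
             | join(';') }}"
The selectattr step keeps only interfaces whose facts define a macaddress, so pseudo-interfaces without one are dropped before the join.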
