Shell command to return a value from JSON output

How do I return a particular value using a shell command?
In the following example I would like the query to return the value of "StackStatus", which is "CREATE_COMPLETE".
Here is the command:
aws cloudformation describe-stacks --stack-name stackname
Here is the output:
{
"Stacks": [{
"StackId": "arn:aws:cloudformation:ap-southeast-2:64560756805470:stack/stackname/8c8e3330-9f35-1er6-902e-50fae94f3fs42",
"Description": "Creates base IAM roles and policies for platform management",
"Parameters": [{
"ParameterValue": "64560756805470",
"ParameterKey": "PlatformManagementAccount"
}],
"Tags": [],
"CreationTime": "2016-10-31T06:45:02.305Z",
"Capabilities": [
"CAPABILITY_IAM"
],
"StackName": "stackname",
"NotificationARNs": [],
"StackStatus": "CREATE_COMPLETE",
"DisableRollback": false
}]
}

The AWS CLI supports the --query option to extract parts of the output. Alternatively, you can pipe the output to another command-line tool, jq, to do a similar query.
In the AWS CLI's own notation, to get the 1st result:
aws cloudformation describe-stacks --stack-name stackname --query 'Stacks[0].StackStatus' --output text
Based on the above output, Stacks is an array of objects, hence the [0] to get the 1st element of the array; .StackStatus is then a key in that object whose value is a string. The --output text option simply presents the result as a plain text value instead of a JSON-quoted string.
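The jq route mentioned above looks like this. The sketch below runs against a trimmed sample of the output, so it can be tested without AWS credentials; the real pipeline is shown in the comment:

```shell
# Trimmed sample of the describe-stacks output shown above:
json='{"Stacks":[{"StackName":"stackname","StackStatus":"CREATE_COMPLETE"}]}'
# -r prints the raw string instead of a JSON-quoted one:
echo "$json" | jq -r '.Stacks[0].StackStatus'
# The real call would be:
# aws cloudformation describe-stacks --stack-name stackname | jq -r '.Stacks[0].StackStatus'
```

Note that jq's -r plays the same role as the CLI's --output text: without it you would get "CREATE_COMPLETE" with quotes.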

Related

Passing json to aws glue create-job after replacement done using jq

I have the following bash script that I execute in order to create new Glue Job via CLI:
#!/usr/bin/env bash
set -e
NAME=$1
PROFILE=$2
SCRIPT_LOCATION='s3://bucket/scripts/'$1'.py'
echo [*]--- Creating new job on AWS
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
I'm using jq as I need one of the values to be replaced at runtime before I pass the .json as the --cli-input-json argument. How can I pass the JSON with the replaced value to this command? As of now, it just prints out the JSON content (although with the value already replaced).
Running the command above causes the following error:
[*]--- Creating new job on AWS
{
"Description": "Template for Glue Job",
"LogUri": "",
"Role": "arn:aws:iam::11111111111:role/role",
"ExecutionProperty": {
"MaxConcurrentRuns": 1
},
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://bucket/scripts/script.py",
"PythonVersion": "3"
},
"DefaultArguments": {
"--TempDir": "s3://temp/admin/",
"--job-bookmark-option": "job-bookmark-disable",
"--enable-metrics": "",
"--enable-glue-datacatalog": "",
"--enable-continuous-cloudwatch-log": "",
"--enable-spark-ui": "true",
"--spark-event-logs-path": "s3://assets/sparkHistoryLogs/"
},
"NonOverridableArguments": {
"KeyName": ""
},
"MaxRetries": 0,
"AllocatedCapacity": 0,
"Timeout": 2880,
"MaxCapacity": 0,
"Tags": {
"KeyName": ""
},
"NotificationProperty": {
"NotifyDelayAfter": 60
},
"GlueVersion": "3.0",
"NumberOfWorkers": 2,
"WorkerType": "G.1X"
}
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument --cli-input-json: expected one argument
The command line
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
executes the command
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json,
takes its standard output and uses it as input to
jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
(which will ignore the input and read from the file given as argument). Please also note that blanks or spaces in $SCRIPT_LOCATION will break your script, because it is not quoted (your quotes are off).
To use the output of one command in the argument list of another command, you must use Command Substitution: outer_command --some-arg "$(inner_command)".
So your command should become:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq '.Command.ScriptLocation = "'"$SCRIPT_LOCATION"'"' ./resources/config.json)"
# or simplified with only double quotes:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json)"
See https://superuser.com/questions/1306071/aws-cli-using-cli-input-json-in-a-pipeline for additional examples.
That said, I have to admit I am not 100% certain that the JSON content can be passed directly on the command line. From looking at the docs and some official examples, it looks like this parameter expects a file name, not a JSON document's content. So it could be that your command in fact needs to be:
# if "-" filename is specially handled:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json | aws glue create-job --profile $PROFILE --name $NAME --cli-input-json -
# "-" filename not recognized:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json > ./resources/config.replaced.json && aws glue create-job --profile $PROFILE --name $NAME --cli-input-json file://./resources/config.replaced.json
Let us know which one worked.

Issue with single quotes running Azure CLI command

My script snippet is as below:
The end goal is to create an Azure DevOps variable group and inject key-values from another variable group into it (the newly created Azure DevOps variable group).
set -x
echo "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" | az devops login --organization https://dev.azure.com/tempcompenergydev
az devops configure --defaults organization=https://dev.azure.com/tempcompenergydev project=Discovery
export target_backend=automation
export target_backend="tempcomp.Sales.Configuration.Spa ${target_backend}"
export new_env="abc"
values=(addressSearchBaseUrl addressSearchSubscriptionKey cacheUrl calendarApiUrl checkoutBffApiUrl cpCode)
az_create_options=""
for ptr in "${values[@]}"
do
result=$(
az pipelines variable-group list --group-name "${target_backend}" | jq '.[0].variables.'${ptr}'.value'
)
printf "%s\t%s\t%d\n" "$ptr" "$result" $?
# add the variable and value to the array
az_create_options="${az_create_options} ${ptr}=${result}"
done
az pipelines variable-group create \
--name "test ${new_env}" \
--variables "${az_create_options}"
However, when the above command executes, I get an unexpected output as below:
+ az pipelines variable-group create --name 'test abc' --variables ' addressSearchBaseUrl="https://qtruat-api.platform.tempcomp.com.au/shared" addressSearchSubscriptionKey="xxxxxxxxxxxxxxxxxxx" cacheUrl="https://tempcompdstqtruat.digital.tempcomp.com.au/app/config" calendarApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/" checkoutBffApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/" cpCode="1067076"'
cpCode "1067076" 0
WARNING: Command group 'pipelines variable-group' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
"authorized": false,
"description": null,
"id": 1572,
"name": "test abc",
"providerData": null,
"type": "Vsts",
"variables": {
"addressSearchBaseUrl": {
"isSecret": null,
"value": "\"https://qtruat-api.platform.tempcomp.com.au/shared\" addressSearchSubscriptionKey=\"xxxxxxxxxxxxxxxxxxxxxxxxx\" cacheUrl=\"https://tempcompdstqtruat.digital.tempcomp.com.au/app/config\" calendarApiUrl=\"https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/\" checkoutBffApiUrl=\"https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/\" cpCode=\"1067076\""
}
}
}
##[section]Finishing: Bash Script
On a side note, if I run the command manually, I do get the correct response. Example below:
az pipelines variable-group create --name "test abc" --variables addressSearchBaseUr="https://qtruat-api.platform.tempcomp.com.au/shared" addressSearchSubscriptionKey="xxxxxxxxxxxxxxxxxxxxxxxxx" cacheUrl="https://tempcompdstqtruat.digital.tempcomp.com.au/app/config" calendarApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/" checkoutBffApiUrl="https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/" cpCode="1067076"
Output:
+ az pipelines variable-group create --name 'test abc' --variables addressSearchBaseUr=https://qtruat-api.platform.tempcomp.com.au/shared addressSearchSubscriptionKey=xxxxxxxxxxxxxxxxxxx cacheUrl=https://tempcompdstqtruat.digital.tempcomp.com.au/app/config 'calendarApiUrl=https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/' 'checkoutBffApiUrl=https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/' cpCode=1067076
WARNING: Command group 'pipelines variable-group' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
"authorized": false,
"description": null,
"id": 1573,
"name": "test abc",
"providerData": null,
"type": "Vsts",
"variables": {
"addressSearchBaseUr": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/shared"
},
"addressSearchSubscriptionKey": {
"isSecret": null,
"value": "xxxxxxxxxx"
},
"cacheUrl": {
"isSecret": null,
"value": "https://tempcompdstqtruat.digital.tempcomp.com.au/app/config"
},
"calendarApiUrl": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/sales/calendar/v1;rev=deadpool/AvailableDates/EnergyMovers/"
},
"checkoutBffApiUrl": {
"isSecret": null,
"value": "https://qtruat-api.platform.tempcomp.com.au/sales/checkout-experience/v1;rev=deadpool/"
},
"cpCode": {
"isSecret": null,
"value": "1067076"
}
}
}
##[section]Finishing: Bash Script
Workaround
I am not a bash pro, but I found a solution that should work for you. I think the core of the issue is that when you build the options up as a single string, the whole set of --variables arguments is treated as one long string. So you're getting this...
az pipelines variable-group create --name "test ${new_env}" --variables ' addressSearchBaseUrl="val" addressSearchSubscriptionKey="val" ...'
instead of this...
az pipelines variable-group create --name "test ${new_env}" --variables addressSearchBaseUrl="val" addressSearchSubscriptionKey="val" ...
Bad: I think you could get around this using eval instead of printing out arguments but using eval in almost any situation is advised against.
Good: Instead, I thought this problem could be tackled a different way. Instead of doing a batch where all variables are created at once, this script will copy the desired variables to a new group one at a time:
set -x
echo "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" | az devops login --organization https://dev.azure.com/tempcompenergydev
az devops configure --defaults organization=https://dev.azure.com/tempcompenergydev project=Discovery
export target_backend=automation
export target_backend="tempcomp.Sales.Configuration.Spa ${target_backend}"
export new_env="abc"
values=(addressSearchBaseUrl addressSearchSubscriptionKey cacheUrl calendarApiUrl checkoutBffApiUrl cpCode)
az_create_options=()
for ptr in "${values[@]}"
do
result=$( az pipelines variable-group list --group-name "${target_backend}" | jq '.[0].variables.'${ptr}'.value' )
echo "$result"
# Result from this az command always wraps the response in double quotes. They should be removed.
stripQuote=$( echo $result | sed 's/^.//;s/.$//' )
printf "%s\t%s\t%d\n" "$ptr" "$stripQuote" $?
# add the variable and value to the array
# When adding this way you ensure each key|value pair gets added as a separate element
az_create_options=("${az_create_options[@]}" "${ptr}|${stripQuote}")
# ^ Note on this: The | is used because there needs to be a way to separate
# the key from the value. What you're really trying to do here is an
# Associative Array, but not all versions of bash support that out of the
# box. This is a bit of a hack to get the same result. CAUTION if your
# variable value contains a | character in it, it will cause problems
# during the copy.
done
# Create the new group, save the group id (would be best to add a check to see if the group by this name exists or not first)
groupId=$( az pipelines variable-group create --name "test ${new_env}" --variables bootstrap="start" | jq '.id' )
for var in "${az_create_options[@]}"
do
# Split the key|value pair at the | character and use the set to create the new variable in the new group
# Some check could also be added here to see if the variable already exists in the new group or not if you want to use this create or update an existing group
arrVar=(${var//|/ })
echo "Parsing variable ${arrVar[0]} with val of ${arrVar[1]}"
az pipelines variable-group variable create \
--id "${groupId}" \
--name "${arrVar[0]}" \
--value "${arrVar[1]}"
done
# This is needed because the az command is dumb and won't let you create a group in the first place without a --variables argument. So I had it create a dummy var and then delete it
az pipelines variable-group variable delete --id "${groupId}" --name "bootstrap" --yes
Notes to call out
I would suggest using an Associative Array instead of the array I listed here if possible. However, Associative Arrays are only supported in bash v4 or higher. Use bash --version to see if you'd be able to use them. See this guide as an example of some ways you could work with those if you're able.
If using my method, be wary that any | character in a variable value that's being copied will cause the script to fail. You may need to pick a different delimiter to split on.
The output of the az pipelines variable-group list command will give the values of the variables wrapped in ". If you try to turn around and throw the exact result you get from the list at a variable create command, you will get a bunch of variables in your group with values like
{
"variable": {
"isSecret": null,
"value": "\”value\”"
}
}
instead of
{
"variable": {
"isSecret": null,
"value": "value"
}
}
The az command is dumb and won't let you create a new variable group in the first place without a --variables argument. So I had it create a dummy var bootstrap="start" and then delete it.
As I mentioned, there may be a better way to print out the array that I'm missing. Comments on my post are encouraged if you know a better way.
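For reference, the separate-arguments behavior the workaround is chasing can also be had directly with a bash array, expanded as "${arr[@]}". This is a sketch, not from the original answers, and the az command in the last comment is illustrative:

```shell
#!/usr/bin/env bash
# Each key=value pair is stored as its own array element...
opts=()
opts+=("addressSearchBaseUrl=https://example/shared")
opts+=("cpCode=1067076")
# ...and "${opts[@]}" expands to one word per element, so each pair
# reaches the command as a separate argument. printf shows the split,
# one element per line, even when a value contains spaces:
printf '<%s>\n' "${opts[@]}"
# Usage would then be:
# az pipelines variable-group create --name "test ${new_env}" --variables "${opts[@]}"
```

This is the same mechanism that makes the manually-typed command work: the shell performs word splitting before az ever sees the arguments.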
This might help....
There is an open issue on how Azure hosted agents authenticate back to ADO using the az-pipelines command.
Feel this is related but if it's not feel free to respond and I'll remove the answer.

How to filter Azure CLI outputs by value using JMESPath queries on Bash when keys contain hyphens/dashes?

The az keyvault secret list --vault-name "lofa" results in a list similar to what is below (but has way more elements of each type):
[
{
"attributes": { ... },
"id": "https://lofa.vault.azure.net/secrets/conn-string",
"tags": {
"file-encoding": "utf-8"
}
},
{
"attributes": { ... },
"id": "https://lofa.vault.azure.net/secrets/a-password",
"tags": null
},
{
"attributes": { ... },
"id": "https://lofa.vault.azure.net/secrets/another-password",
"tags": {
"testThis": "miez",
"what": "else"
}
}
]
Tried to filter the "easy" targets first (i.e., JSON objects where the keys contain no hyphens), and it worked as expected:
$ az keyvault secret list --vault-name "lofa" --query "[? tags.testThis=='vmi']"
The same didn't work for file-encoding keys (resulting in an invalid jmespath_type value error):
$ az keyvault secret list --vault-name "lofa" --query "[?tags.file-encoding=='utf-8']"
So tried single quotes next, but no joy:
$ az keyvault secret list --vault-name "lofa" --query "[?tags.'file-encoding'=='utf-8']"
And if you want a solution that does not force you to escape any quotes, you can use:
single quotes for your shell parameters
az keyvault secret list --vault-name 'lofa' --query '[]'
double quotes for your key, since it contains a dash
az keyvault secret list --vault-name 'lofa' --query '[?tags."file-encoding"]'
backticks for your literal, utf-8
az keyvault secret list --vault-name 'lofa' --query '[?tags."file-encoding"==`utf-8`]'
The solution was using
escaped double quotes (\") for JSON object keys (in this case, for file-encoding), and
single quotes for JSON object values (in this case, for utf-8)
az keyvault secret list --vault-name "lofa" --query "[?tags.\"file-encoding\"=='utf-8']"
NOTE: When trying to substitute escaped double quotes for the value part as well,
az keyvault secret list --vault-name "lofa" --query "[?tags.\"file-encoding\"==\"utf-8\"]"
it resulted in a list that had all the entries except the ones that had tags.file-encoding. This is because in JMESPath double quotes denote a quoted identifier, not a string literal, so \"utf-8\" is read as a reference to a key named utf-8; no entry has such a key, so the comparison becomes null == null, which is true precisely for the entries where tags.file-encoding is also missing.
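Since jq comes up throughout this thread: the equivalent filter in jq also needs the hyphenated key quoted, though there is no identifier/literal ambiguity. A sketch against sample data mirroring the list above:

```shell
# Hyphenated keys must be quoted in jq too: .tags."file-encoding"
# (.tags is null for the second entry; indexing null just yields null)
echo '[{"id":"conn-string","tags":{"file-encoding":"utf-8"}},{"id":"a-password","tags":null}]' |
  jq -c '[.[] | select(.tags."file-encoding" == "utf-8") | .id]'
# -> ["conn-string"]
```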

pass arguments of make commands

I have a sequence of make commands to upload a zip file to an S3 bucket and then update the Lambda function that reads that S3 file as its source code. Once I update the Lambda function, I wish to publish it, and after publishing, attach an event to it using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
	@rm unwanted_build_files.zip

build-lambda-pkg:
	mkdir pkg
	cd pkg && docker run #something something
	cd pkg && zip -9qr build.zip
	cp pkg/build.zip .
	rm pkg

upload-s3:
	aws s3api put-object --bucket my_bucket \
		--key build.zip --body build.zip

update-lambda:
	aws lambda update-function-code --function-name my_lambda \
		--s3-bucket my_bucket \
		--s3-key build.zip

publish-lambda:
	aws lambda publish-version --function-name my_lambda

## I can get the "Arn" value from the publish-lambda command: it prints
## a JSON structure to the terminal that has a "FunctionArn" key.

attach-event:
	aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
		--targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"

## the following combines the above commands into a single target
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step, i.e. combining publish-lambda and attach-event into the build-n-update target. The problem is that I am unable to pass an argument from the previous command to the next one. I will try to explain it better:
publish-lambda prints a json style output on terminal:
{
"FunctionName": "my_lambda",
"FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
"Runtime": "python3.6",
"Role": "arn:aws:iam::12345:role/my_role",
"Handler": "lambda_function.lambda_handler",
"CodeSize": 62403592,
"Description": "",
"Timeout": 180,
"MemorySize": 512,
"LastModified": "2021-02-28T17:34:04.374+0000",
"CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
"Version": "5",
"Environment": {
"Variables": {
"PATH": "/var/task/bin",
"PYTHONPATH": "/var/task/src:/var/task/lib"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "49b5-acdd-c1032aa16bfb",
"State": "Active",
"LastUpdateStatus": "Successful"
}
I wish to extract the function ARN stored in the "FunctionArn" key of the above output and use it in the next command, attach-event, since its --targets argument takes the ARN of the last published function.
Is it possible to do in single command?
I have tried to experiment a bit as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help
Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run, then the output of that command (run it yourself from your shell prompt and see). Of course, jq cannot parse the command line that make prints.
Anyway, what would be the goal of this? You'd just print the function name to stdout and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
	func=$$(aws lambda publish-version --function-name my_lambda \
		| jq .FunctionArn -r); \
	aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
		--targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
	aws lambda publish-version --function-name my_lambda > publish.json

attach-event:
	aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
		--targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"
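Either way, the jq extraction itself can be sanity-checked against a trimmed sample of the publish-version output from the question:

```shell
# Trimmed sample of the publish-version JSON; jq -r emits the bare ARN string
echo '{"FunctionName":"my_lambda","FunctionArn":"arn:aws:lambda:us-east-2:12345:function:my_lambda:5"}' |
  jq -r .FunctionArn
# -> arn:aws:lambda:us-east-2:12345:function:my_lambda:5
```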

Find ec2 instances with improper or missing tags

I am trying to simply output a list of all instance IDs that do not follow a particular tagging convention.
Tag is missing (Tag Keys: Environment or Finance)
Environment Tag value is not one of (prod, stg, test, dev)
Finance Tag value is not one of (GroupA , GroupB)
For (1) I can use the following:
aws ec2 describe-instances --output json --query 'Reservations[*].Instances[?!not_null(Tags[?Key==`Environment`].Value)] | [].InstanceId'
[
"i-12345678901234567",
"i-76543210987654321"
]
But I still need (2) and (3). What if the tag exists but is empty, or has a typo in the value?
"ec2 --query" functionality is limited and I've yet to find a way for it to get me (2) or (3), especially when it comes to inverting results.
For (2) and (3), I've gone back and forth between trying to modify the output from the CLI to make it easier to parse in jq, versus trying to wrangle the output in jq itself. Here's a pair of outputs from the CLI that I've tried sending to jq, with sample output for 2 instances:
CLI Sample Output [A] Tag.Value and Tag.Key need to be paired when searching, and then negating/inverting a set of searches...
aws ec2 describe-instances --output json --query 'Reservations[].Instances[].{ID:InstanceId, Tag: Tags[]}' | jq '.[]'
{
"Tag": [
{
"Value": "GroupA",
"Key": "Finance"
},
{
"Value": "stg",
"Key": "Environment"
},
{
"Value": "true",
"Key": "Backup"
},
{
"Value": "Another Server",
"Key": "Name"
}
],
"ID": "i-87654321"
}
{
"Tag": [
{
"Value": "GroupB",
"Key": "Finance"
},
{
"Value": "Server 1",
"Key": "Name"
},
{
"Value": "true",
"Key": "Backup"
},
{
"Value": "stg",
"Key": "Environment"
}
],
"ID": "i-12345678"
}
CLI Sample Output [B] Tag value being inside an array has been enough to trigger syntax errors when attempting things like "jq map" or "jq select"
aws ec2 describe-instances --output json --query 'Reservations[].Instances[].{ID:InstanceId, EnvTag: Tags[?Key==`Environment`].Value, FinTag: Tags[?Key==`Finance`].Value}' | jq '.[]'
{
"EnvTag": [
"stg"
],
"ID": "i-87654321",
"FinTag": [
"GroupA"
]
}
{
"EnvTag": [
"stg"
],
"ID": "i-12345678",
"FinTag": [
"GroupB"
]
}
I find most of the time, when I try to expand some solution from a simpler use case, I only ever end up with cryptic syntax errors due to some oddity in the structure of my incoming dataset.
Example Issue 1
Below is an example of how the inverting / negating fails. This is using CLI output B:
aws ec2 describe-instances --output json --query 'Reservations[].Instances[].{ID:InstanceId, EnvTag: Tags[?Key==`Environment`].Value, FinTag: Tags[?Key==`Finance`].Value}' | jq '.[]' | jq 'select(.EnvTag[] | contains ("prod", "dev") | not)'
I would expect the above to return everything except prod and dev. But it looks like the logic is inverted on each item as opposed to the set of contains:
"!A + !B" instead of "!(A or B)"
The resulting dataset returned is a list of everything, including dev and prod.
Example Issue 1.5
I can work around the logic issue by chaining the contains excludes, but then I discover that "contains" won't work for me, as it will pick up typos that still happen to contain the string in question:
aws ec2 describe-instances --output json --query 'Reservations[].Instances[].{ID:InstanceId, EnvTag: Tags[?Key==`Environment`].Value, FinTag: Tags[?Key==`Finance`].Value}' | jq '.[]' | jq 'select(.EnvTag[] | contains ("dev") | not) | select(.EnvTag[] | contains ("stg") | not) | select(.EnvTag[] | contains ("test") | not) | select(.EnvTag[] | contains ("prod") | not) | select (.EnvTag[] | contains ("foo") | not)'
prod != production
"prod" contains("prod") = true
"production" contains ("prod") = true <-- bad :(
It can be greatly simplified. First, in this case, there is no need to invoke jq twice. jq '.[]' | jq ... is equivalent to jq '.[] | ...'
Second, the long pipeline of 'select' filters can be condensed, for example to:
select(.EnvTag[]
| (. != "dev" and . != "stg" and . != "prod" and . != "test" and . != "ops"))
or, if your jq has all/2, even more concisely to:
select( . as $in | all( ("dev", "stg", "prod", "test", "ops"); . != $in.EnvTag[]) )
I believe I've found a solution. It may not be optimal, but I've found a way to pipe-chain excludes of exact strings:
aws ec2 describe-instances --output json --query 'Reservations[].Instances[].{ID:InstanceId, EnvTag: Tags[?Key==`Environment`].Value, FinTag: Tags[?Key==`Finance`].Value}' | jq '.[]' | jq 'select(.EnvTag[] != "dev") | select (.EnvTag[] != "stg") | select (.EnvTag[] != "prod") | select (.EnvTag[] != "test") | select (.EnvTag[] != "ops") | .ID'
I verified this by changing an environment tag from "ops" to "oops".
Upon running this query, it returned the single instance with the oops tag.
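A variation on the same idea, sketched here against inline sample data (IDs and tag values are illustrative): keep the allowed values in one JSON array and test exact membership with jq's index, which avoids both the contains substring trap and the long select chain. This assumes a single Environment tag value per instance (.EnvTag[0]):

```shell
sample='[{"ID":"i-1","EnvTag":["prod"]},{"ID":"i-2","EnvTag":["oops"]}]'
echo "$sample" | jq -r --argjson ok '["dev","stg","prod","test","ops"]' '
  .[]
  # keep entries whose Environment value is NOT in the allowed list
  | select(.EnvTag[0] as $e | $ok | index($e) | not)
  | .ID'
# -> i-2
```

Exact membership catches the "oops" typo, whereas a contains("ops")-based exclude would wrongly treat it as valid.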
