I have this ECS task definition, as follows:
{
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/foo:1.0",
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/bar:latest",
....
}
I need to replace only the first "image" value, for instance:
{
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/foo:2.0",
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/bar:latest",
....
}
Here's my command:
sed -e "s/.*foo:.*/\"image\":\"${REPO}:${VERSION}\",/" taskdef.json
where REPO=123.dkr.ecr.us-east-1.amazonaws.com/foo and VERSION=2.0.
This is the error I got:
sed: -e expression #1, char 70: unknown option to `s'
This happens because of the slashes (/) in the REPO variable.
You can use any character as the delimiter for sed's s command; the first character after the s becomes the delimiter. For example, using #:
sed -e "s#foo:.*#\"image\":\"${REPO}:${VERSION}\",#" taskdef.json
This will resolve this particular issue (assuming there is no # in $REPO or $VERSION), since the slashes will no longer break the pattern.
To replace only the first image value, you could use awk:
$ awk -v repo="$REPO" -v vers="$VERSION" '
!f && ($1~/"image"/) { f=1; sub(/:.*/,""); $0=$0 ": \"" repo ":" vers "\"," } 1
' file
{
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/foo:2.0",
...
"image": "123.dkr.ecr.us-east-1.amazonaws.com/bar:latest",
....
}
The above would convert escape sequences to their literal characters (e.g. \t to a literal tab character) if they appeared in REPO or VERSION. If that's a possible issue, there's a trivial workaround: set the variables on the awk command line, or export them and read them with ENVIRON[]. It will then work no matter what other characters appear in the strings, since it uses literal string functionality.
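For example, a minimal sketch of the ENVIRON[] variant (assuming REPO and VERSION have been exported by the calling shell):
# same replacement as above, but reads the strings via ENVIRON[] so escape sequences are left untouched
export REPO VERSION
awk '
!f && ($1~/"image"/) { f=1; sub(/:.*/,""); $0=$0 ": \"" ENVIRON["REPO"] ":" ENVIRON["VERSION"] "\"," } 1
' file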
The right way, with the JSON processor jq (v1.5):
Sample ECS task definition task.json:
{
"containerDefinitions": [
{
"name": "wordpress",
"links": [
"mysql"
],
"image": "123.dkr.ecr.us-east-1.amazonaws.com/foo:1.0",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10
},
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "password"
}
],
"name": "mysql",
"image": "123.dkr.ecr.us-east-1.amazonaws.com/bar:latest",
"cpu": 10,
"memory": 500,
"essential": true
}
],
"family": "hello_world"
}
The job:
jq '.containerDefinitions[0].image = (.containerDefinitions[0].image | sub("1.0$";"2.0"))' task.json
The output:
{
"containerDefinitions": [
{
"name": "wordpress",
"links": [
"mysql"
],
"image": "123.dkr.ecr.us-east-1.amazonaws.com/foo:2.0",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10
},
{
"environment": [
{
"name": "MYSQL_ROOT_PASSWORD",
"value": "password"
}
],
"name": "mysql",
"image": "123.dkr.ecr.us-east-1.amazonaws.com/bar:latest",
"cpu": 10,
"memory": 500,
"essential": true
}
],
"family": "hello_world"
}
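If you'd rather pass the full image reference in from the shell than rewrite the tag with sub(), a sketch using --arg (assuming REPO and VERSION are set as in the question) could be:
# set the first container's image directly from a shell variable
jq --arg img "${REPO}:${VERSION}" '.containerDefinitions[0].image = $img' task.json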
Using jq, I'm trying to add data to a specific element in my JSON below:
{
"users": [
{
"username": "karim",
"queue": [
"default"
]
},
{
"username": "admin",
"queue": [
"apps",
"prod"
]
}
]
}
What I want to do is add an item to the queue[] of the user admin, like this:
{
"users": [
{
"username": "hive",
"queue": [
"default"
]
},
{
"username": "admin",
"queue": [
"apps",
"prod",
"dev"
]
}
]
}
This is the command I used:
jq '.users[] | select(.username == "admin").queue += ["dev"]' file.json
But the result is not as expected
{
"username": "hive",
"queue": [
"default"
]
}
{
"username": "admin",
"queue": [
"apps",
"prod",
"dev"
]
}
Why doesn't the users array appear? I need to keep it in the result.
With the pipe you are changing the context down to an array element, which is what you want for the selection. If you put parentheses around the pipe and the selection, the assignment, and thus the filter's output, stays at the top level:
jq '(.users[] | select(.username == "admin")).queue += ["dev"]'
{
"users": [
{
"username": "karim",
"queue": [
"default"
]
},
{
"username": "admin",
"queue": [
"apps",
"prod",
"dev"
]
}
]
}
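If you prefer to keep the update inside a map over the array, an equivalent sketch (same effect on this input) would be:
jq '.users |= map(if .username == "admin" then .queue += ["dev"] else . end)' file.json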
First of all, sorry for my English, I'm French.
I'm working on a script that retrieves tags and links from M3U files and stores them in variables.
M3U:
#EXTM3U
#EXTINF:-1 tvg-id="TFX.fr" tvg-name="TFX" tvg-country="FR;AD;BE;LU;MC;CH" tvg-language="French" tvg-logo="http://www.exemple.com/image.jpg" group-title="",TFX (720p)
https://tfx-hls-live-ssl.tf1.fr/tfx/1/hls/live_2328.m3u8
script:
#!/bin/bash
tags='#EXTINF:-1 tvg-id="TFX.fr" tvg-name="TFX" tvg-country="FR;AD;BE;LU;MC;CH" tvg-language="French" tvg-logo="http://www.exemple.com/image.jpg" group-title="Fiction",TFX (720p)'
get_chno="$(echo "$tags" | grep -o 'tvg-chno="[^"]*' | cut -d '"' -f2)"
get_id="$(echo "$tags" | grep -o 'tvg-id="[^"]*' | cut -d '"' -f2)"
get_logo="$(echo "$tags" | grep -o 'tvg-logo="[^"]*' | cut -d '"' -f2)"
get_grp_title="$(echo "$tags" | grep -o 'group-title="[^"]*' | cut -d '"' -f2)"
get_title="$(echo "$tags" | grep -o ',[^*]*' | cut -d ',' -f2)"
get_name="$(echo "$tags" | grep -o 'tvg-name="[^"]*' | cut -d '"' -f2)"
get_country="$(echo "$tags" | grep -o 'tvg-country="[^"]*' | cut -d '"' -f2)"
get_language="$(echo "$tags" | grep -o 'tvg-language="[^"]*' | cut -d '"' -f2)"
echo -e "chno:\n $get_chno"
echo -e "id:\n $get_id"
echo -e "logo:\n $get_logo"
echo -e "grp 1:\n $get_grp_title"
echo -e "title:\n $get_title"
echo -e "name:\n $get_name"
echo -e "country:\n $get_country"
echo -e "lang:\n $get_language"
I would like to store these variables in a json file.
This json will be used to rebuild another playlist.
#EXTM3U
#EXTINF:-1 tvg-id="TFX.fr" tvg-name="TFX" tvg-country="FR;AD;BE;LU;MC;CH" tvg-language="French" tvg-logo="http://www.exemple.com/image.jpg" group-title="",TFX (720p)
https://tfx-hls-live-ssl.tf1.fr/tfx/1/hls/live_2328.m3u8
#EXTINF:-1 tvg-id="TFX.fr" tvg-name="TFX" tvg-country="FR;AD;BE;LU;MC;CH" tvg-language="French" tvg-logo="http://127.0.0.1/img/image.jpg" group-title="",TFX (local)
http://127.0.0.1:1234/tfx/live.m3u8
The file contains multiple arrays and multiple objects, like this:
{
"Channels": [
{
"name": "TFX",
"old_name": "NT1",
"logo": "http://www.exemple.com/image.jpg",
"category": "Fiction",
"urls": {
"Official": [
{
"server_name": "TF1",
"IP_address": "8.8.8.8",
"url": "tfx-hls-live-ssl.tf1.fr",
"port": "",
"https_port": "443",
"path": "tfx/1/hls/",
"file_name": "live_2328",
"extension": ".m3u8",
"full_url": "https://tfx-hls-live-ssl.tf1.fr/tfx/1/hls/live_2328.m3u8"
}
],
"Xtream_Servers": [
{
"server_name": "local",
"user_name": "rickey",
"stream_id": "11",
"category_name": "Fiction",
"category_id": "12"
}
]
},
"languages": [
{
"code": "fr",
"name": "Français"
}
],
"countries": [
{
"code": "fr",
"name": "France"
},
{
"code": "be",
"name": "Belgium"
}
],
"tvg": {
"id": "TFX.fr",
"name": "TFX",
"url": ""
}
},
{
"name": "France 2",
"old_name": "",
"logo": "http://www.exemple.com/image.jpg",
"category": "Général",
"urls": {
"Official": [
{
"server_name": "France TV",
"IP_address": "8.8.8.8",
"url": "france2.fr",
"port": "",
"https_port": "443",
"path": "live/",
"file_name": "Playlist",
"extension": ".m3u8",
"full_url": "https://france2.fr/live/Playlist.m3u8"
}
],
"Xtream_Servers": [
{
"server_name": "localhost",
"user_name": "rickey",
"stream_id": "2",
"category_name": "Général",
"category_id": "10"
}
]
},
"languages": [
{
"code": "fr",
"name": "Français"
}
],
"countries": [
{
"code": "fr",
"name": "France"
},
{
"code": "be",
"name": "Belgique"
}
],
"tvg": {
"id": "France2.fr",
"name": "France 2",
"url": ""
}
},
{
"name": "M6",
"old_name": "",
"logo": "http://www.exemple.com/image.jpg",
"category": "Général",
"urls": {
"Official": [
{
"server_name": "6Play",
"IP_address": "8.8.8.8",
"url": "6play.fr",
"port": "",
"https_port": "443",
"path": "live/",
"file_name": "Playlist",
"extension": ".m3u8",
"full_url": "https://6play.fr/M6/live/Playlist.m3u8"
}
],
"Xtream_Servers": [
{
"server_name": "localhost",
"user_name": "rickey",
"stream_id": "6",
"category_name": "Général",
"category_id": "10"
}
]
},
"languages": [
{
"code": "fr",
"name": "Français"
}
],
"countries": [
{
"code": "fr",
"name": "France"
},
{
"code": "be",
"name": "Belgique"
}
],
"tvg": {
"id": "France2.fr",
"name": "France 2",
"url": ""
}
}
],
"Third_Party": {
"Xtream_Servers": [
{
"server_name": "local",
"url": "192.168.1.100",
"port": "8080",
"https_port": "8082",
"server_protocol": "http",
"rtmp_port": "12345",
"Users_list": [
{
"username": "rickey",
"password": "azerty01",
"created_at": "",
"exp_date": "",
"is_trial": "0",
"last_check": "",
"max_connections": "3",
"allowed_output_formats": [
"m3u8",
"ts",
"rtmp"
]
}
]
},
{
"server_name": "localhost",
"url": "127.0.0.1",
"port": "8080",
"https_port": "8082",
"server_protocol": "http",
"rtmp_port": "12345",
"Users_list": [
{
"username": "rickey123",
"password": "azerty321",
"created_at": "",
"exp_date": "",
"is_trial": "0",
"last_check": "",
"max_connections": "3",
"allowed_output_formats": [
"m3u8",
"ts",
"rtmp"
]
},
{
"username": "guest",
"password": "guest01",
"created_at": "",
"exp_date": "",
"is_trial": "1",
"last_check": "",
"max_connections": "1",
"allowed_output_formats": [
"ts"
]
}
]
}
]
}
}
First question: Is it a crappy json?
To add to or modify this file, the script needs the entry number (I think; if you have any other ideas, I'm interested...).
cat File.json | jq '.Channels | to_entries[]'
output:
{
"key": 0,
"value": {
"name": "TFX",
"old_name": "NT1",
Second question:
How do I get the key value (0 in this case) from the value of "name", so I can store it in a variable afterwards (to avoid duplicates)?
key_="$(cat file.json | jq ????????? search="name": "$get_name" ???? .key)"
echo $key_
"0"
key_2="$(cat file.json | jq ????????? search="name": "$get_url" ???? .key)"
echo $key_2
"0"
if [[ $key_ == $key_2 ]]; then
Chan_Name="$(cat $1 | jq '.Channels[$key_].name)"
Echo $Chan_Name
"TFX"
jq '.[] ????? += {???? , ??? }' file.json | sponge file.json
fi
last question (most important):
How do I find and modify these objects when the script does not know any of the key values of the objects/arrays?!
I've been looking for 2 days, my brain is liquid.
Thank you. :)
Edit 1 :
I've found a partial solution to replace a value:
{
"name": "TFX",
"old_name": "NT1",
"logo": "http://www.exemple.com/image.jpg",
"category": "Fiction",
with:
cat file.json | jq -C '(.Channels[] | select(.name=="TFX").category="test")'
output:
{
"name": "TFX",
"old_name": "NT1",
"logo": "http://www.exemple.com/image.jpg",
"category": "test",
"urls": {
but "{"Channels": [" is missing. :/
jq -C '(.Channels[] | select(.name=="TFX").category="test")'
You were so close - just one misplaced parenthesis:
jq '(.Channels[] | select(.name=="TFX")) .category="test"'
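For the second question (getting the index of a channel by its name so it can be stored in a shell variable), a sketch along these lines could work, assuming $get_name holds the channel name extracted by the script; index(true) returns the position of the first match, or null if there is none:
# index of the first channel whose .name equals $get_name
key_="$(jq --arg name "$get_name" '.Channels | map(.name == $name) | index(true)' file.json)"
# reuse the index elsewhere, e.g. to read that channel's name back (assumes the name was found)
Chan_Name="$(jq -r --argjson i "$key_" '.Channels[$i].name' file.json)"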
I would like to update a config.yaml file by inserting some configuration parameters via bash.
The file to be updated looks like:
{
"log": [
{
"format": "plain",
"level": "info",
"output": "stderr"
}
],
"p2p": {
"topics_of_interest": {
"blocks": "normal",
"messages": "low"
},
"trusted_peers": [
{
"address": "/ip4/13.230.137.72/tcp/3000",
"id": "fe3332044877b2034c8632a08f08ee47f3fbea6c64165b3b"
}
]
},
"rest": {
"listen": "127.0.0.1:3100"
}
}
And it needs to look like:
{
"log": [
{
"format": "plain",
"level": "info",
"output": "stderr"
}
],
"storage": "./storage",
"p2p": {
"listen_address":"/ip4/0.0.0.0/tcp/3000",
"public_address":"/ip4/0.0.0.0/tcp/3000",
"topics_of_interest": {
"blocks": "normal",
"messages": "low"
},
"trusted_peers": [
{
"address": "/ip4/13.230.137.72/tcp/3000",
"id": "fe3332044877b2034c8632a08f08ee47f3fbea6c64165b3b"
}
]
},
"rest": {
"listen": "127.0.0.1:3100"
}
}
So I am adding:
on the first level: "storage": "./storage",
and on the second level, in the p2p section: "listen_address":"/ip4/0.0.0.0/tcp/3000" and "public_address":"/ip4/0.0.0.0/tcp/3000".
How do I do this with sed?
If you are certain that your YAML file is written in the JSON subset of YAML, you can use jq:
jq --arg a "/ip4/0.0.0.0/tcp/3000" \
'.storage = "./storage" |
.p2p += {listen_address: $a, public_address: $a}' config.yaml > tmp &&
mv tmp config.yaml
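If moreutils' sponge happens to be available (an assumption), the temporary file can be avoided:
jq --arg a "/ip4/0.0.0.0/tcp/3000" \
'.storage = "./storage" |
.p2p += {listen_address: $a, public_address: $a}' config.yaml | sponge config.yaml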
I have JSON data and need to filter it by the value of the attribute DNSName. The filter must be case-insensitive.
How can I do that? Is there a possibility to solve it with jq?
This is how I create the json code:
aws elbv2 describe-load-balancers --region=us-west-2 | jq
My unfiltered source json code looks like this:
{
"LoadBalancers": [
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB1/a00000000000000a",
"State": {
"Code": "active"
},
"DNSName": "MY-LB1-123454321.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-00100100",
"sg-01001000",
"sg-10010001"
],
"LoadBalancerName": "MY-LB1",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-17171717",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-27272727",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-37373737",
"ZoneName": "us-west-2b"
}
]
},
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB2/b00000000000000b",
"State": {
"Code": "active"
},
"DNSName": "MY-LB2-9876556789.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-88818881"
],
"LoadBalancerName": "MY-LB2",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-54545454",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-64646464",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-74747474",
"ZoneName": "us-west-2b"
}
]
}
]
}
I now want some bash code to filter this result for the record with the DNSName property value MY-LB2-9876556789.us-west-2.elb.amazonaws.com, and I need the entire LoadBalancer object back as a result. This is how I want my result to look:
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB2/b00000000000000b",
"State": {
"Code": "active"
},
"DNSName": "MY-LB2-9876556789.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-88818881"
],
"LoadBalancerName": "MY-LB2",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-54545454",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-64646464",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-74747474",
"ZoneName": "us-west-2b"
}
]
}
Does anyone know how to do it?
Update:
This solution works, but is not case insensitive:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c '.LoadBalancers[] | select(.DNSName | contains("MY-LB2"))'
Update:
This solution seems to work even better:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c '.LoadBalancers[] | select(.DNSName | match("my-lb2";"i"))'
But I have not had a chance to test it in detail yet.
You probably should be using test/2 rather than match/2, but in either case, since the problem description calls for
case-insensitive equality, you would use an anchored regex:
.LoadBalancers[]
| select(.DNSName | test("^my-lb2-9876556789\\.us-west-2\\.elb\\.amazonaws\\.com$";"i"))
With the caveat that ascii_upcase only translates ASCII characters, it might be more efficient to use it:
.LoadBalancers[]
| select(.DNSName | ascii_upcase == "MY-LB2-9876556789.US-WEST-2.ELB.AMAZONAWS.COM")
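A sketch of how the exact-equality variant might be wired into the original command, passing the name with --arg so no regex escaping is needed:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c --arg dns "MY-LB2-9876556789.us-west-2.elb.amazonaws.com" '.LoadBalancers[] | select(.DNSName | ascii_downcase == ($dns | ascii_downcase))'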
Rasa NLU version (0.11.3):
Used backend / pipeline (spacy_sklearn):
Operating system (osx):
Issue: I tried to follow the tutorial: https://rasahq.github.io/rasa_nlu/tutorial.html?highlight=project#,
Installed spaCy + sklearn
Created config_spacy.json
Downloaded the sample file and trained the model
I've tested the greeting and goodbye intents and they work,
but when I test with this command:
curl -X POST localhost:5000/parse -d '{"q":"I am looking for Mexican food"}' | python -m json.tool
it returns:
{
"intent": {
"name": "None",
"confidence": 1.0
},
"entities": [],
"text": "yes"
}
Content of configuration file (if used & relevant):
{
"project": null,
"fixed_model_name": null,
"config": "config.json",
"data": null,
"emulate": null,
"language": "en",
"log_file": null,
"log_level": "INFO",
"mitie_file": "data/total_word_feature_extractor.dat",
"spacy_model_name": null,
"num_threads": 1,
"max_training_processes": 1,
"path": "/rasa_nlu/projects",
"port": 5000,
"token": null,
"cors_origins": [],
"max_number_of_ngrams": 7,
"pipeline": [],
"response_log": "/rasa_nlu/logs",
"storage": null,
"aws_endpoint_url": null,
"duckling_dimensions": null,
"duckling_http_url": null,
"ner_crf": {
"BILOU_flag": true,
"features": [
[
"low",
"title",
"upper",
"pos",
"pos2"
],
[
"bias",
"low",
"word3",
"word2",
"upper",
"title",
"digit",
"pos",
"pos2",
"pattern"
],
[
"low",
"title",
"upper",
"pos",
"pos2"
]
],
"max_iterations": 50,
"L1_c": 1,
"L2_c": 0.001
},
"intent_classifier_sklearn": {
"C": [
1,
2,
5,
10,
20,
100
],
"kernel": "linear"
}
}
Status:
{
"available_projects": {
"default": {
"status": "ready",
"available_models": [
"fallback"
]
}
}
}
In your config file the pipeline is set to [] but needs to be configured properly. The documentation for the pipeline configuration option can be found here. The available options are discussed here.
The pipeline can either be a pre-configured pipeline like: mitie, spacy_sklearn, or keyword. It can also be a custom pipeline like: ["nlp_spacy", "ner_crf", "ner_synonyms"]. I would recommend setting your pipeline to:
pipeline: "space_sklearn"
Update your configuration file and restart the server. If the server is still running in a console window press Ctrl + c to stop it. Then re-enter the command you used to start it.
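For reference, in the JSON configuration file shown in the question the change would look roughly like this (only the relevant key is shown; the custom-pipeline form uses the component names mentioned above):
"pipeline": "spacy_sklearn"
or
"pipeline": ["nlp_spacy", "ner_crf", "ner_synonyms"]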