zenity input file with several lines - bash

I have a problem with zenity that I cannot work out. Could you help me?
I have a 7-line file named tmp3:
AAA
BBB
...
FFF
GGG
I want to feed this file to zenity so that it displays a checklist where I can check any of the lines, in any combination.
I previously wrote:
choice=$(cat tmp3 | zenity --list \
--column='#' \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
All this does is create a single row in zenity containing all 7 lines of tmp3. That's not what I want.
I currently wrote this:
choice=$(zenity --list \
--column "Playlists" FALSE $(cat tmp3) \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
Here something really weird happens that I don't understand: only 4 of the 7 entries are created in zenity (AAA, CCC, EEE and GGG), but not the others. When I enable set -x for debugging I can see all 7 lines being passed to zenity... What is happening?
I tried another approach by listing the 7 subfolders in my current folder (which happen to have exactly the same names as the lines in tmp3). The same thing happens:
I wrote this:
choice=$(zenity --list \
--column "Playlists" FALSE $(ls -d -1 */) \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
The second approach seems easier, but my skills aren't very advanced, and I would like to understand why it behaves this way.
Thank you guys!
EDIT:
I have found the following thread and tried to adapt it to my case, but with no success so far:
http://www.linuxquestions.org/questions/programming-9/reading-lines-to-an-array-and-generate-dynamic-zenity-list-881421/

The part FALSE $(cat tmp3) expands to
FALSE AAA
BBB
CCC
DDD
EEE
FFF
GGG
Since the dialog has two columns (the check state and the label), zenity consumes those words in pairs, which is why only AAA, CCC, EEE and GGG show up as rows. What you need is
FALSE AAA
FALSE BBB
FALSE CCC
FALSE DDD
FALSE EEE
FALSE FFF
FALSE GGG
One way to achieve this is --column "Playlists" $(sed s/^/FALSE\ / tmp3) \
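Putting it together, the asker's command could be adapted along these lines (an untested sketch; relying on word splitting of the command substitution assumes the playlist names contain no spaces):
choice=$(zenity --list --checklist --multiple \
--title "Please select one or more playlists" \
--text "Select playlist from the list below" \
--width=300 --height=300 --separator="/ " \
--column "Select" --column "Playlists" \
$(sed 's/^/FALSE /' tmp3))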

There's an interesting example in man zenity :
zenity \
--list \
--checklist \
--column "Buy" \
--column "Item" \
TRUE Apples \
TRUE Oranges \
FALSE Pears \
FALSE Toothpaste
You just need to adapt it a bit. =)
EDIT:
if you have a list of unknown length, this example will be more interesting:
find . -name '*.h' |
zenity \
--list \
--title "Search Results" \
--text "Finding all header files.." \
--column "Files"

I know I'm kinda late, but wanted just about the same thing, and figured it out in the end.
My solution does a search (hiding errors), adds TRUE and a newline to each result (that was the key!), then sends the result to zenity:
CHECKED=`find /music/folder -name "*.mp3" -type f 2>/dev/null | \
awk '{print "TRUE\n"$0}' | \
zenity --list --checklist --separator='\n' --title="Select Results." \
--text="Finding all MP3 files..." --column="" --column="Files"`
In your situation, I guess this should be:
CHECKED=`cat tmp3 | awk '{print "TRUE\n"$0}' | zenity --list --checklist \
--separator='/ ' --title="Select Results." \
--text="Finding all MP3 files..." --column="" --column="Select"`
So it seems zenity treats each newline-separated value as the next cell, filling the list row by row. This means you can manipulate the strings going into zenity to add any number of columns.
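For example, a row with more columns is just more newline-separated values per input line. Here is a hedged sketch that adds a hypothetical line-number column to the asker's list:
awk '{print "FALSE\n" $0 "\n" NR}' tmp3 | \
zenity --list --checklist --separator="/ " \
--title="Please select one or more playlists" \
--text="Select playlist from the list below" \
--column="" --column="Playlist" --column="#"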

In short, you have two options:
Option one: input file, with newlines separating the columns
Instead of
cat tmp3 | zenity ... ...
do:
sed 's/^/.\n/' tmp3 | zenity ... ...
Option two: inline command, where the columns are read as pairs from the command arguments
Instead of
cat tmp3 | zenity ... ...
do:
zenity ... ... `sed 's/^/. /' tmp3`

$ zenity --list --checklist --height 400 --text "Select playlist from the list below" --title "Please select one or more playlists" --column "Select" --column "Playlists" --separator="/ " \
$(ls -d -1 */ | xargs -L1 echo FALSE)

Related

Editing lines in .mk file

I would like to edit a .mk file using Bash.
Inside the file, it looks like this:
SRC_PATHS = src/lib \
src/Application \
src/win \
src/prj
I would like to add a new source, which should look like this:
SRC_PATHS = src/lib \
src/Application \
src/win \
src/prj \
src/New
I am trying a sed command, but cannot add a new line.
Note: the last src path (src/prj) is not always the same.
If ed is available/acceptable.
#!/usr/bin/env bash
ed -s file.mk <<-'EOF'
$t.
-1s/$/ \\/
+s|\(^[[:blank:]]\{1,\}\) \(.\{1,\}\)$|\1 src/New|
,p
Q
EOF
As a one-liner:
printf '%s\n' '$t.' '-1s/$/ \\/' '+s|\(^[[:blank:]]*\) \(.*\)$|\1 src/New|' ,p Q | ed -s file.mk
With a shell variable to store the replacement:
#!/usr/bin/env bash
var='src/New'
ed -s file.mk <<-EOF
\$t.
-1s/\$/ \\\/
+s|\(^[[:blank:]]\{1,\}\) \(.\{1,\}\)\$|\1 $var|
,p
Q
EOF
Remove the ,p to silence the output to stdout; it is there just to show the new content of the edited buffer.
Change Q to w if in-place editing is needed.
JFYI, neither the script nor the one-liner is limited to bash; they should work in any POSIX-compliant shell.
With sed how about:
sed -i '$s#.\+#& \\'\\$'\n'' src/New#' file.mk
Result:
SRC_PATHS = src/lib \
src/Application \
src/win \
src/prj \
src/New
Considering the indentation of the input and of the desired result, which is not uniform between the first line and the others, I suspect that it is not important at all. If this is the case, then this sed command might work:
sed -z 's#\n$# \\\nsrc/New\n#' file.mk
where
-z is to treat the file as a single line/stream with embedded \ns
\n$ targets the EOF together with the last \n
the replacement string is \\\nsrc/New\n.
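Since -z is a GNU sed extension anyway, the same command can edit the file in place with -i once the preview looks right (a sketch, not part of the original answer):
sed -z -i 's#\n$# \\\nsrc/New\n#' file.mk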
Thanks to all who answered. I tried all your suggestions, and here are the working snippets applicable to my needs:
sed -i '/^SRC_PATHS[\t ]*=/{:a;/\\$/{N;ba;};s,$, \\\n\tsrc/New,}' file.mk
There are some instances where there is already a trailing "\" in the file, so I added a command to clean up those lines:
sed -i '/^$*.\\/d' file.mk
Then, to add another path at the end of the file:
sed -i '$s#.\+#& \\'\\$'\n'' src/New#' file.mk

Sanitize a string for json [duplicate]

I'm using git, then posting the commit message and other bits as a JSON payload to a server.
Currently I have:
MSG=`git log -n 1 --format=oneline | grep -o ' .\+'`
which sets MSG to something like:
Calendar can't go back past today
then
curl -i -X POST \
-H 'Accept: application/text' \
-H 'Content-type: application/json' \
-d "{'payload': {'message': '$MSG'}}" \
'https://example.com'
My real JSON has another couple of fields.
This works fine, but of course when I have a commit message such as the one above with an apostrophe in it, the JSON is invalid.
How can I escape the required characters in bash? I'm not familiar with the language, so I'm not sure where to start. Replacing ' with \' would do the job at a minimum, I suspect.
jq can do this.
Lightweight, free, and written in C, jq enjoys widespread community support with over 15k stars on GitHub. I personally find it very speedy and useful in my daily workflow.
Convert string to JSON
echo -n '猫に小判' | jq -Rsa .
# "\u732b\u306b\u5c0f\u5224"
To explain,
-R means "raw input"
-s means "include linebreaks" (mnemonic: "slurp")
-a means "ascii output" (optional)
. means "output the root of the JSON document"
Git + Grep Use Case
To fix the code example given by the OP, simply pipe through jq.
MSG=`git log -n 1 --format=oneline | grep -o ' .\+' | jq -Rsa .`
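Because jq -Rsa already wraps the value in double quotes, $MSG can be dropped into the payload without extra quoting. A sketch based on the OP's curl call (note the JSON itself now uses double quotes, as the format requires):
curl -i -X POST \
-H 'Accept: application/text' \
-H 'Content-type: application/json' \
-d "{\"payload\": {\"message\": $MSG}}" \
'https://example.com'
If the trailing newline that git/grep emit is unwanted inside the message, build MSG with jq -Rsa 'rtrimstr("\n")' instead of jq -Rsa . .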
Using Python:
This solution is not pure bash, but it's non-invasive and handles unicode.
json_escape () {
printf '%s' "$1" | python -c 'import json,sys; print(json.dumps(sys.stdin.read()))'
}
Note that JSON is part of the standard python libraries and has been for a long time, so this is a pretty minimal python dependency.
Or using PHP:
json_escape () {
printf '%s' "$1" | php -r 'echo json_encode(file_get_contents("php://stdin"));'
}
Use like so:
$ json_escape "ヤホー"
"\u30e4\u30db\u30fc"
Instead of worrying about how to properly quote the data, just save it to a file and use the @ construct that curl allows with the --data option. To ensure that the output of git is correctly escaped for use as a JSON value, use a tool like jq to generate the JSON, instead of creating it manually.
jq -n --arg msg "$(git log -n 1 --format=oneline | grep -o ' .\+')" \
'{payload: { message: $msg }}' > git-tmp.txt
curl -i -X POST \
-H 'Accept: application/text' \
-H 'Content-type: application/json' \
-d @git-tmp.txt \
'https://example.com'
You can also read directly from standard input using -d @-; I leave that as an exercise for the reader to construct the pipeline that reads from git and produces the correct payload message to upload with curl.
(Hint: it's jq ... | curl ... -d @- 'https://example.com')
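Spelling out that hint, one possible shape of the pipeline (a sketch reusing the OP's commands and URL):
git log -n 1 --format=oneline | grep -o ' .\+' | \
jq -R -s '{payload: {message: .}}' | \
curl -i -X POST \
-H 'Accept: application/text' \
-H 'Content-type: application/json' \
-d @- \
'https://example.com'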
I was also trying to escape characters in Bash, for transfer using JSON, when I came across this. I found that there is actually a larger list of characters that must be escaped – particularly if you are trying to handle free form text.
There are two tips I found useful:
Use the Bash ${string//substring/replacement} syntax described in this thread.
Use the actual control characters for tab, newline, carriage return, etc. In vim you can enter these by typing Ctrl+V followed by the actual control code (Ctrl+I for tab for example).
The resultant Bash replacements I came up with are as follows:
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//\\/\\\\} # \
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//\//\\\/} # /
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//\'/\\\'} # ' (not strictly needed ?)
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//\"/\\\"} # "
JSON_TOPIC_RAW=${JSON_TOPIC_RAW// /\\t} # \t (tab)
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//
/\\\n} # \n (newline)
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//^M/\\\r} # \r (carriage return)
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//^L/\\\f} # \f (form feed)
JSON_TOPIC_RAW=${JSON_TOPIC_RAW//^H/\\\b} # \b (backspace)
I have not at this stage worked out how to escape Unicode characters correctly which is also (apparently) required. I will update my answer if I work this out.
OK, found out what to do. Bash supports this natively as expected, though as always, the syntax isn't really very guessable!
Essentially ${string//substring/replacement} returns what you'd imagine, so you can use
MSG=${MSG//\'/\\\'}
to do this. The next problem is that the first regex doesn't work anymore, but that can be replaced with
git log -n 1 --pretty=format:'%s'
In the end, I didn't even need to escape them. Instead, I just swapped all the ' in the JSON to \". Well, you learn something every day.
git log -n 1 --format=oneline | grep -o ' .\+' | jq --slurp --raw-input
The above line works for me. See https://github.com/stedolan/jq for more jq tools.
I found something like this:
MSG=`echo $MSG | sed "s/'/\\\\\'/g"`
The simplest way is to use jshon, a command-line tool to parse, read and create JSON.
jshon -s 'Your data goes here.' 2>/dev/null
[...] with an apostrophe in it, the JSON is invalid.
Not according to https://www.json.org. A single quote is allowed in a JSON string.
How can I escape the characters required in bash?
You can use xidel to properly prepare the JSON you want to POST.
As https://example.com can't be tested, I'll be using https://api.github.com/markdown (see this answer) as an example.
Let's assume 'çömmít' "mêssågè" as the exotic output of git log -n 1 --pretty=format:'%s'.
Create the (serialized) JSON object with the value of the "text"-attribute properly escaped:
$ git log -n 1 --pretty=format:'%s' | \
xidel -se 'serialize({"text":$raw},{"method":"json","encoding":"us-ascii"})'
{"text":"'\u00E7\u00F6mm\u00EDt' \"m\u00EAss\u00E5g\u00E8\""}
Curl (variable)
$ eval "$(
git log -n 1 --pretty=format:'%s' | \
xidel -se 'msg:=serialize({"text":$raw},{"method":"json","encoding":"us-ascii"})' --output-format=bash
)"
$ echo $msg
{"text":"'\u00E7\u00F6mm\u00EDt' \"m\u00EAss\u00E5g\u00E8\""}
$ curl -d "$msg" https://api.github.com/markdown
<p>'çömmít' "mêssågè"</p>
Curl (pipe)
$ git log -n 1 --pretty=format:'%s' | \
xidel -se 'serialize({"text":$raw},{"method":"json","encoding":"us-ascii"})' | \
curl -d#- https://api.github.com/markdown
<p>'çömmít' "mêssågè"</p>
Actually, there's no need for curl if you're already using xidel.
Xidel (pipe)
$ git log -n 1 --pretty=format:'%s' | \
xidel -s \
-d '{serialize({"text":read()},{"method":"json","encoding":"us-ascii"})}' \
"https://api.github.com/markdown" \
-e '$raw'
<p>'çömmít' "mêssågè"</p>
Xidel (pipe, in-query)
$ git log -n 1 --pretty=format:'%s' | \
xidel -se '
x:request({
"post":serialize(
{"text":$raw},
{"method":"json","encoding":"us-ascii"}
),
"url":"https://api.github.com/markdown"
})/raw
'
<p>'çömmít' "mêssågè"</p>
Xidel (all in-query)
$ xidel -se '
x:request({
"post":serialize(
{"text":system("git log -n 1 --pretty=format:'\''%s'\''")},
{"method":"json","encoding":"us-ascii"}
),
"url":"https://api.github.com/markdown"
})/raw
'
<p>'çömmít' "mêssågè"</p>
This is an escaping solution using Perl that escapes backslash (\), double-quote (") and control characters U+0000 to U+001F:
$ echo -ne "Hello, 🌵\n\tBye" | \
perl -pe 's/(\\(\\\\)*)/$1$1/g; s/(?!\\)(["\x00-\x1f])/sprintf("\\u%04x",ord($1))/eg;'
Hello, 🌵\u000a\u0009Bye
I struggled with the same problem. I was trying to add a variable to the payload of a cURL request in bash, and it kept coming back as invalid JSON. After trying a LOT of escaping tricks, I arrived at a simple method that fixed my issue. The answer was all in the single and double quotes:
curl --location --request POST 'https://hooks.slack.com/services/test-slack-hook' \
--header 'Content-Type: application/json' \
--data-raw '{"text":'"$data"'}'
Maybe it comes in handy for someone!
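Note that this only yields valid JSON if $data itself already expands to a valid JSON value. One hedged way to prepare it, borrowing jq from the answers above ($msg here is a hypothetical variable holding the raw text):
msg='some "raw" text'
data=$(printf '%s' "$msg" | jq -Rsa .)
curl --location --request POST 'https://hooks.slack.com/services/test-slack-hook' \
--header 'Content-Type: application/json' \
--data-raw '{"text":'"$data"'}'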
I had the same idea: send a message containing the commit message after each commit.
First I tried a similar approach to the author's here, but later found a simpler solution: I created a PHP file that sends the message, and I call it with wget.
In hooks/post-receive:
wget -qO - "http://localhost/git.php"
In git.php:
chdir("/opt/git/project.git");
$git_log = exec("git log -n 1 --format=oneline | grep -o ' .\+'");
Then build the JSON and call curl from PHP.
Integrating a JSON-aware tool in your environment is sometimes a no-go, so here's a POSIX solution that should work on every UNIX/Linux:
json_stringify() {
[ "$#" -ge 1 ] || return 1
LANG=C awk '
BEGIN {
for ( i = 1; i <= 127; i++ )
repl[ sprintf( "%c", i) ] = sprintf( "\\u%04x", i )
for ( i = 1; i < ARGC; i++ ) {
s = ARGV[i]
printf("%s", "\"")
while ( match( s, /[\001-\037\177"\\]/ ) ) {
printf("%s%s", \
substr(s,1,RSTART-1), \
repl[ substr(s,RSTART,RLENGTH) ] \
)
s = substr(s,RSTART+RLENGTH)
}
print s "\""
}
exit
}
' "$#"
}
Or using the widely available perl:
json_stringify() {
[ "$#" -ge 1 ] || return 1
LANG=C perl -le '
for (@ARGV) {
s/[\x00-\x1f\x7f"\\]/sprintf("\\u%04x",ord($0))/ge;
print "\"$_\""
}
' -- "$#"
}
Then you can do:
json_stringify '"foo\bar"' 'hello
world'
"\u0022foo\bar\u0022"
"hello\u000aworld"
Limitations:
Doesn't handle NUL bytes.
Doesn't validate the input as Unicode; it only escapes the mandatory ASCII characters specified in RFC 8259.
Replying to OP's question:
MSG=$(git log -n 1 --format=oneline | grep -o ' .\+')
curl -i -X POST \
-H 'Accept: application/text' \
-H 'Content-type: application/json' \
-d '{"payload": {"message": '"$(json_stringify "$MSG")"'}}' \
'https://example.com'

Remove space between new line output

I am trying to capture the output into a file using
cat <<EOF> /var/log/awsmetadata.log
timestamp= $TIME, \
region= $REGION, \
instanceIp= $INSTANCE_IP, \
availabilityZone= $INSTANCE_AZ, \
instanceType= $INSTANCE_TYPE, \
EOF
The output is created in the following format:
cat /var/log/awsmeta.log
timestamp= 2020-11-04 18:51:17, region= us-west-2, instanceIp= 1.2.3.4, availabilityZone= us-west-2a,
How can I eliminate the wide spaces between the fields in the output line?
If you don't want redundant whitespace, simply do not add it. The extra spaces come from the indentation of the continuation lines, which is preserved inside the here-document:
$ cat <<EOF> /var/log/awsmetadata.log
> timestamp= $TIME, \
> region= $REGION, \
> instanceIp= $INSTANCE_IP, \
> availabilityZone= $INSTANCE_AZ, \
> instanceType= $INSTANCE_TYPE
> EOF
I often use sed or tr instead of cat for this sort of thing:
tr -s ' ' <<EOF > /var/log/awsmetadata.log
timestamp= $TIME, \
region= $REGION, \
instanceIp= $INSTANCE_IP, \
availabilityZone= $INSTANCE_AZ, \
instanceType= $INSTANCE_TYPE,
EOF
But it seems cleaner to not escape the newlines at all and do something like:
{ tr -d \\n <<-EOF; echo; } > /var/log/awsmetadata.log
timestamp= $TIME,
region= $REGION,
instanceIp= $INSTANCE_IP,
availabilityZone= $INSTANCE_AZ,
instanceType= $INSTANCE_TYPE,
EOF
(That solution uses the <<- form of the heredoc, which strips leading hard-tab indentation. It will not remove leading spaces.)
OTOH, it seems odd to use a here-document when you just want to generate one line of output. Why not just use echo?
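A minimal sketch of that idea using printf (same variables as the question; printf keeps the field formatting in one place):
printf 'timestamp= %s, region= %s, instanceIp= %s, availabilityZone= %s, instanceType= %s\n' \
"$TIME" "$REGION" "$INSTANCE_IP" "$INSTANCE_AZ" "$INSTANCE_TYPE" \
> /var/log/awsmetadata.log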

How do I replace template variables in text file with data in bash script

I have a template file like the one shown below. It contains a number of variables that I want to replace with values I peel off of a JSON doc. I'm able to do it with sed for the few simple ones, but I have problems with <ARN> and others like it.
#test "Test <SCENARIO_NAME>--<EXPECTED_ACTION>" {
<SKIP_BOOLEAN>
testfile="data/<FILE_NAME>"
assert_file_exist $testfile
IBP_JSON=$(cat $testfile)
run aws iam simulate-custom-policy \
--resource-arns \
"<ARN>"
--action-names \
"<ACTION_NAMES>"
--context-entries \
"ContextKeyName='aws:PrincipalTag/Service', \
ContextKeyValues='svc1', \
ContextKeyType=string" \
"ContextKeyName='aws:PrincipalTag/Department', \
ContextKeyValues='shipping', \
ContextKeyType=string" \
<EXTRA_CONTEXT_KEYS>
--policy-input-list "${IBP_JSON}"
assert_success
<TEST_EXPRESSION>
}
I want the <ARN> placeholder to be replaced with the following text:
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*" \
How can I do that replacement while also preserving the formatting (the trailing \ and line endings)?
The easiest is to use bash itself:
original=$(cat file.txt)
read -r -d '' replacement <<'EOF'
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*" \
EOF
placeholder='"<ARN>"'
modified=${original/$placeholder/$replacement}
echo "$modified"
Look for ${parameter/pattern/string} in man bash.
If input.txt is the input file and replace.txt contains the replacement text:
$ cat input.txt
run aws iam simulate-custom-policy \
--resource-arns \
"<ARN>"
--action-names \
"<ACTION_NAMES>"
$ cat replace.txt
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \\\
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*"
then you can use sed with # delimiters to make the replacement:
$ sed "s#\"<ARN>\"#$(< replace.txt)#g" input.txt
run aws iam simulate-custom-policy \
--resource-arns \
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*"
--action-names \
"<ACTION_NAMES>"
Here $(< replace.txt) is equivalent to $(cat replace.txt)

Leveraging graphviz to create a network weathermap configuration

Given a generated list of nodes and links, is there a way I can use dot or some other tool from the graphviz package to create coordinates for those nodes such that I in turn can use that information to generate a configuration file for network weathermap?
The answer is simple: calling dot or the other tools without an output argument prints the information I wanted to stdout.
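For reference, dot's default output (with no -T or -o given) is the same dot source with layout attributes such as pos="x,y" added to each node, and -Tplain gives an even simpler line-oriented format that is easy to parse from a script. A small sketch (graph.gv is a hypothetical input file):
dot graph.gv          # dot source echoed back with pos="x,y" coordinates added
dot -Tplain graph.gv  # one "node <name> <x> <y> <width> <height> ..." line per node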
I wrote this shell script to make a graph from an mrtg config file, but decided not to pursue the weathermap part because the results were too cluttered:
grep -P '^SetEnv.*MRTG_INT_IP="..*" MRTG_INT_DESCR=".*"' $1 | grep -v 'MRTG_INT_IP="127.' | grep -v 'MRTG_INT_IP="10.255.' |\
sed \
-e 's/SetEnv\[\(.*\.switch\.hapro\.no_.*\)]: MRTG_INT_IP="\(.*\)" MRTG_INT_DESCR="\(.*\)"/\1 \2 \3/' \
-e 's/\//_/g' |\
sort -t/ -k 1 -n -k 2 -n -k 3 -n -k 4 |\
gawk '
BEGIN { print "graph '$2' {"; }
{
graph[overlap=false];
v = "'$2'"
print v " -- " $3
}
END { print "}" }'
Thought I would share this in case someone else found it useful in the future.
I used the script like this: ./mkconf ../switch/mrtg.1c.conf 1c | dot -Tpng > test.png
