Java output as variables within same shell script - bash

I'm running Java code inside a shell script:
java -cp ojdbc6.jar:. javaClassName args
Is it possible to do command substitution on the Java output inside the shell script?
The output of the Java code is an array:
[{ID:143},{Name:John},{Age:32},{Designation:Enginner},{City:Delhi},{Phone:+123 456 789},{Email:abc#gmai.com}]
I want to declare the above array values as variables inside the same shell script where the Java code runs:
ID=${ID}
Name=${Name}

Try:
grep -oE '(:[^}]+)' | head -2 | tr -d ':'
Demo:
$ read -r Id Name <<< $(echo '[{ID:143},{Name:John},{Age:32},{Designation:Enginner},{City:Delhi},{Phone:+123 456 789},{Email:abc#gmai.com}]' | grep -oE '(:[^}]+)' | head -2 | tr -d ':')
$ echo $Id
143
$ echo $Name
John
$
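If you need every field rather than just the first two, one option is to read each key:value pair into an associative array; a sketch, assuming the same bracketed output format and bash 4+ for declare -A:
out='[{ID:143},{Name:John},{Age:32},{Designation:Enginner},{City:Delhi},{Phone:+123 456 789},{Email:abc#gmai.com}]'
declare -A fields
# each grep match is one key:value pair; read splits on the first ':' only
while IFS=: read -r key val; do
    fields[$key]=$val
done < <(grep -oE '[^{},]+:[^{}]+' <<< "$out")
echo "${fields[ID]}"    # 143
echo "${fields[Name]}"  # John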

Related

convert bash pipeline into a function with parameter

I have a pipeline I use to preview csv files:
cat file_name.csv | sed -e 's/,,/, ,/g' | column -t -s ","| less -s
But I want to create an alias viewcsv that will let me just swap in the filename.
I tried viewcsv="cat $1 | sed -e 's/,,/, ,/g' | column -t -s ","| less -s" but that didn't work. Googling suggested that I need to convert this pipeline into a function. How can I convert it so that viewcsv file_name.csv produces the same output as cat file_name.csv | sed -e 's/,,/, ,/g' | column -t -s ","| less -s?
Function syntax looks like this:
viewcsv() {
sed -e 's/,,/, ,/g' "$1" | column -t -s ","| less -s
}
Notice that I have replaced cat "$1" | sed with sed ... "$1", passing the file directly to sed and avoiding a useless use of cat.
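Once the function is defined (for example in your ~/.bashrc), it is invoked exactly as you wanted:
viewcsv file_name.csv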
csvkit has a CSV previewer, by the way:
$ csvlook <<< $'a,b,c\n10,20,30'
| a | b | c |
| -- | -- | -- |
| 10 | 20 | 30 |

echo strings with environment variables from lines pulled from a file in bash

I have a file like so:
- ${VAR1}/blah/blah:/blah1
- ${VAR2}/blah/blah:/blah2
- $VAR3:/blah3
I ultimately need to create those three folders.
I am using sed to extract the folder part:
$ cat test.txt | grep -E '^ +- \$.*?:.*?$' | sed 's/.*- \(\$.*\):.*/\1/g'
${VAR1}/blah/blah
${VAR2}/blah/blah
$VAR3
I need to create those folders but I need those shell variables to expand. Right now they don't:
$ cat test.txt | grep -E '^ +- \$.*?:.*?$' | sed 's/.*- \(\$.*\):.*/\1/g' | while read line; do echo "$line"; done
${VAR1}/blah/blah
${VAR2}/blah/blah
$VAR3
Is there a way to get the expanded strings so I can run mkdir instead of echo to make the folders?
You may use this bash script with envsubst:
#!/usr/bin/env bash
export VAR1 VAR2 VAR3
while IFS=' -:' read -r _ d _; do
mkdir -p "$d"
done < <(envsubst < test.txt)
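For reference, envsubst just replaces references to exported variables in its input; a quick check of what the read loop receives, assuming VAR1 is exported as, say, /data:
$ export VAR1=/data
$ envsubst < test.txt | head -n 1
- /data/blah/blah:/blah1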
Alternatively use this envsubst + awk + xargs solution:
envsubst < test.txt |
awk -F '[-:[:blank:]]+' -v ORS='\0' '{print $2}' |
xargs -0 mkdir -p
First of all, those variables must be exported to be accessible from your script. Then you can use a combination of the cut and tr commands to extract the directory name in a loop, like the following:
#!/bin/bash -eu
while read -r LINE; do
echo "$LINE" | cut -d ':' -f 1 | tr -d ' ' | tr -d '-'
done < test.txt
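To actually create the folders with this approach, a minimal sketch that pipes the extracted name through envsubst before mkdir (assuming, as above, that the variables are exported, and noting that tr strips every space and '-', so paths must not contain them):
#!/bin/bash -eu
export VAR1 VAR2 VAR3
while read -r LINE; do
    # take the part before ':', drop the leading '- ', then expand the variables
    dir=$(echo "$LINE" | cut -d ':' -f 1 | tr -d ' ' | tr -d '-' | envsubst)
    mkdir -p "$dir"
done < test.txt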

GNU parallel with custom script doing string comparison

The following script.sh compares part of a string (coming from stdin by cat-ing a CSV file) to a defined string and reports the differences in a certain format:
#!/usr/bin/env bash
reference="ABCDEFG"
ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
while read line; do
line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
echo "$(echo ${line:0:35}, $output)"
done < "${1:-/dev/stdin}"
It is intended to be executed on a number of rows from a very large file in the format
XYZ,ABMDEFG
and it works well when I use it in a pipe:
cat large_file | ./find_something.sh
However, when I try to use it with parallel, I get this error:
$ cat large_file | parallel ./find_something.sh
./find_something.sh: line 9: XYZ, ABMDEFG : No such file or directory
What is causing this? Is parallel supposed to work for something like this, if I want to redirect the output to a single file afterwards?
Less important side note: I'm rather proud of my string comparison method, but if someone has a faster way to get from comparing ABCDEFG and XYZ,ABMDEFG to obtain XYZ,C3M I'd be happy to hear that, too.
Edit:
I should have said, I also want to preserve the order of each line in the output, corresponding to the input. Is that possible using parallel?
Your script accepts its input from a file (defaulting to stdin), whereas parallel will pass input as arguments, not via stdin. In that sense, parallel is closer to xargs.
Presumably, you want each of the lines in large_file to be processed as a unit, possibly in parallel.
That means you need your script to only process one such line at a time, and let parallel call your script many times, once for each line.
So your script should look like this:
#!/usr/bin/env bash
reference="ABCDEFG"
ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
line="$1"
line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
echo "$(echo ${line:0:35}, $output)"
Then you can redirect to a file as follows:
cat large_file | parallel ./find_something.sh > output_file
-k keeps the order.
#!/usr/bin/env bash
doit() {
reference="ABCDEFG"
ref_transp=$(echo "$reference" | sed -e 's/\(.\)/\1\n/g')
while read line; do
line_transp=$(echo "$line" | cut -d',' -f2 | sed -e 's/\(.\)/\1\n/g')
output=$(paste -d ' ' <(echo "$ref_transp") <(echo "$line_transp") | grep -vnP '([A-Z]) \1' | sed -E 's/([0-9][0-9]*):([A-Z]) ([A-Z]*)/\2\1\3/' | grep '^[A-Z][0-9][0-9]*[A-Z*]$')
echo "$(echo ${line:0:35}, $output)"
done
}
export -f doit
cat large_file | parallel --pipe -k doit
#or
parallel --pipepart -a large_file --block -10 -k doit
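Either invocation can be redirected to a single file while keeping the input order (thanks to -k), e.g.:
cat large_file | parallel --pipe -k doit > output_file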

working with "unclear" declared variables [duplicate]

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 3 years ago.
I am trying to save the specific output from a piped command to a variable.
value= ping google.de -c 20 | grep -oe \/[0-9]. | head -n 1 | tr -d [\/] | tr -d "\n\r"
This is supposed to save the average ping to the variable "value".
However, when I try to further process the variable, e.g. in an echo line like:
echo "The Average ping is: $variable"
The output is
The Average ping is: $variable
Even when I try to pass the value to another variable, like:
value2= $value
the result is the same.
I read that variables in bash need to be declared in a certain way; might this be the problem in this specific case?
sh or bash:
value="`ping google.de -c 20 | grep -oe \/[0-9]. | head -n 1 | tr -d [\/] | tr -d "\n\r"`"
bash:
value="$(ping google.de -c 20 | grep -oe \/[0-9]. | head -n 1 | tr -d [\/] | tr -d "\n\r")"

Use each line of piped output as parameter for script

I have an application (myapp) that gives me multi-line output:
result:
abc|myparam1|def
ghi|myparam2|jkl
mno|myparam3|pqr
stu|myparam4|vwx
With grep and sed I can get my parameters as below
myapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//'
But then I want these myparamx values as parameters of a script to be executed for each parameter:
myscript.sh myparam1
myscript.sh myparam2
etc.
Any help greatly appreciated
Please see xargs. For example:
myapp | grep '|' | sed -e 's/^[^|]*//' | sed -e 's/|.*//' | xargs -n 1 myscript.sh
Maybe this can help:
myapp | awk -F"|" '{ print $2 }' | while read -r line; do /path/to/myscript.sh "$line"; done
I like the xargs -n 1 solution from Dark Falcon, and while read is the classic tool for this kind of thing; just for completeness:
myapp | awk -F'|' '{print "myscript.sh", $2}' | bash
As a side note, speaking about extraction of the 2nd field, you could use cut:
myapp | cut -d'|' -f2 # -f2 => second field; cut fields are numbered from 1
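That combines naturally with the xargs approach above (keeping grep '|' so the result: header line is skipped):
myapp | grep '|' | cut -d'|' -f2 | xargs -n 1 myscript.sh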
