Unix. Call a variable inside another variable - shell

Currently I have a script like this. Its intended purpose is to use the function Getlastreport to retrieve the name of the latest report in a folder. The folder names are typically a randomly generated number, created every night. I want to call the function Getlastreport and use its result inside MAXcashfunc.
Example:
Getlastreport = 3473843.
Use MAXcashfunc: grep -r "Max*" /David/reports/$Getlastreport[[the number 3473843 should go here]]/"Moneyfromyesterday.csv" > Report
Script:
#!/bin/bash
Getlastreport()
{
    cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
MAXcashfunc()
{
    grep -r "Max*" /David/reports/$Getlastreport/"Moneyfromyesterday.csv" > Report
}
## call MAXcashfunc
MAXcashfunc

You can use:
MAXcashfunc() {
    grep -r "Max" /David/reports/`Getlastreport`/"Moneyfromyesterday.csv" > Report
}
`Getlastreport` - calls Getlastreport and substitutes its output.

If I follow your question, you could use
function Getlastreport() {
    cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
function MAXcashfunc() {
    grep -r "Max" /David/reports/$(Getlastreport)/"Moneyfromyesterday.csv" > Report
}
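A side note on the quoted Getlastreport: cd /David/reports/ | ls ... does not chain the two commands. A pipe only connects stdout to stdin, and cd produces no output, so ls runs in whatever the current directory happens to be. Parsing ls -l with cut is also fragile. A minimal sketch of a sturdier version, assuming the newest entry under /David/reports is the report directory you want:

Getlastreport() {
    # Print the name of the most recently modified entry in /David/reports
    ls -rt /David/reports/ | tail -1
}
MAXcashfunc() {
    # Quote the whole path so unexpected characters don't split it
    grep -r "Max" "/David/reports/$(Getlastreport)/Moneyfromyesterday.csv" > Report
}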

Related

compgen not displaying all expected suggestions

I need to add some bash shell completion with words read from a json file:
...
{
    'bundle': 'R20_B1002_ORDERSB1_FROMB1',
    'version': '0.1',
    'envs': ['DEV','QUAL','PREPROD2'],
},
{
    'bundle': 'R201_QA069_ETIQETTENS_FROMSAP',
    'version': '0.1',
    'envs': ['DEV','QUAL','QUAL2','PREPROD'],
}
...
To get a word list I can run this command line, and it returns all expected words from my file:
grep 'bundle' liste_routes.py | sed "s/'bundle': '//" | sed "s/',//" | grep -v '#'
For instance, with an additional "grep R20" it returns:
R20_B1002_ORDERSB1_FROMB1
R201_QA069_ETIQETTENS_FROMSAP
R202_LOG287_LIVRAISONSORTANTE_FROMLSP
R203_PP052_FULLSTOCKSAP_FROMSAP
R204_CO062_PRIXTRANSF_FROMOLGA
R206_LOG419_NOMENCLBOMPROD_FROMTDX
R207_CERTIFNFGAZ
R208_SAL363_ARTICLEPRICING_FROMSAP
R209_LOG451_WHSCON_FROMTDX
Now I put this in a completion file and source it in my bash session.
_find_routenames()
{
    search="$cur"
    grep 'bundle' liste_routes.py | sed "s/'bundle': '//" | sed "s/',//" | sed "s/\r//g" | grep -v '#' | awk '{$1=$1;print}'
}
_esbdeploy_completions()
{
    #local IFS=$'\n'
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    cur="${COMP_WORDS[1]}"
    COMPREPLY=( $( compgen -W "$(_find_routenames)" -- "$cur" ) )
    ##### COMPREPLY=($(compgen -W "$(grep 'bundle' liste_routes.py | sed \"s/'bundle': '//\" | sed \"s/',//\" | grep -v '#')" -- "${COMP_WORDS}"))
}
complete -F _esbdeploy_completions d.py
complete -F _esbdeploy_completions deploy_karaf_v4.py
complete -F _esbdeploy_completions show.py
The problem is that when I type
./d.py R20<TAB>
I get these suggestions:
R201_QA069_ETIQETTENS_FROMSAP R203_PP052_FULLSTOCKSAP_FROMSAP R206_LOG419_NOMENCLBOMPROD_FROMTDX R208_SAL363_ARTICLEPRICING_FROMSAP
R202_LOG287_LIVRAISONSORTANTE_FROMLSP R204_CO062_PRIXTRANSF_FROMOLGA R207_CERTIFNFGAZ R209_LOG451_WHSCON_FROMTDX
It misses R20_B1002_ORDERSB1_FROMB1 from my first grep test.
I don't think it's an issue with the underscore, as other tests with "./d.py R10" do suggest "R10_xxxx".
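One way to narrow this down is to run the pieces by hand, outside the completion machinery. A sketch (the sourced filename is hypothetical):

# Assumes the two functions above live in a file called completion.bash
source ./completion.bash
_find_routenames | grep R20                 # is the word in the raw list?
compgen -W "$(_find_routenames)" -- R20     # does compgen keep it after prefix matching?

If compgen run this way does print R20_B1002_ORDERSB1_FROMB1, the word list and the prefix matching are fine, and the culprit is more likely something in the completion environment (stray \r characters are a classic cause).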

conditional grep from a Json file

I have a query with this JSON and grep:
[
    {
        "name": "Jon",
        "id": 123
    },
    {
        "name": "Ray",
        "id": 1234
    },
    {
        "name": "Abraham",
        "id": 12345
    }
]
How can one extract the name from this JSON where the id matches, say, 1234 (the id can be arbitrary), using grep or sed?
I would suggest using jq, but if you want to use grep, try
grep -B1 'id.*1234' < input_file | grep name
From the man page:
-B num, --before-context=num
Print num lines of leading context before each match. See also the -A and -C options.
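Note that with the sample data, id.*1234 also matches "id":12345, so Abraham's line would be printed too. Anchoring the pattern avoids this, assuming the id is the last thing on its line, as in the sample:

grep -B1 '"id":1234$' < input_file | grep name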
Please suggest the jq command.
I'll take the liberty of fulfilling the request.
jq -r '.[]|select(.id==1234).name' file
.[] - iterates the array elements
select(.id==1234) - filters element with desired id
.name - extracts name
The option -r causes the name to be written unquoted.
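With the sample data above, this prints:

$ jq -r '.[]|select(.id==1234).name' file
Ray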

Using a pipe inside a bash function

Is there a way I can make this one liner here into a bash function?
mdfind -name autoflush.py | grep -Ev 'Library|VMWare|symf|larav' | sort
I tried to do it like this:
function mdf () { mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort }
but didn't have success with it.
Can't I use the pipe operator inside functions in bash?
My next approach was this:
function mdf () {
    result=mdfind -name "$1"
    grepped_result=grep -Ev 'Library|VMWare|symf|larav' $result
    sort $grepped_result # return sort $grepped_result ?
}
I am guessing there are many conceptual errors in my approach, so I would appreciate any help and input.
You're missing a semi-colon in the first attempt.
mdf() { mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort; }
Just a quirk of shell syntax that you need it there. If you put the command on its own line then you don't need one.
mdf() {
    mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort
}
(I've removed the function keyword. For compatibility's sake you should write either func() or function func but not combine them.)
Give shellcheck.net a try the next time you're stuck. It's a syntax checker for shell scripts. A real godsend.
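For reference, the second approach fails because result=mdfind -name "$1" is parsed as a temporary assignment (result=mdfind) followed by a command (-name), not as capturing output. A sketch using command substitution (the pipeline version above is still simpler):

mdf() {
    local result grepped_result
    result=$(mdfind -name "$1")            # $(...) captures the command's output
    grepped_result=$(grep -Ev 'Library|VMWare|symf|larav' <<< "$result")
    sort <<< "$grepped_result"
}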

If xargs is map, what is filter?

I think of xargs as the map function of the UNIX shell. What is the filter function?
EDIT: it looks like I'll have to be a bit more explicit.
Let's say I have to hand a program which accepts a single string as a parameter and returns with an exit code of 0 or 1. This program will act as a predicate over the strings that it accepts.
For example, I might decide to interpret the string parameter as a filepath, and define the predicate to be "does this file exist". In this case, the program could be test -f, which, given a string, exits with 0 if the file exists, and 1 otherwise.
I also have to hand a stream of strings. For example, I might have a file ~/paths containing
/etc/apache2/apache2.conf
/foo/bar/baz
/etc/hosts
Now, I want to create a new file, ~/existing_paths, containing only those paths that exist on my filesystem. In my case, that would be
/etc/apache2/apache2.conf
/etc/hosts
I want to do this by reading in the ~/paths file, filtering those lines by the predicate test -f, and writing the output to ~/existing_paths. By analogy with xargs, this would look like:
cat ~/paths | xfilter test -f > ~/existing_paths
It is the hypothesized program xfilter that I am looking for:
xfilter COMMAND [ARG]...
Which, for each line L of its standard input, will call COMMAND [ARG]... L, and if the exit code is 0, it prints L, else it prints nothing.
To be clear, I am not looking for:
a way to filter a list of filepaths by existence. That was a specific example.
how to write such a program. I can do that.
I am looking for either:
a pre-existing implementation, like xargs, or
a clear explanation of why this doesn't exist
If map is xargs, filter is... still xargs.
Example: list files in the current directory and filter out non-executable files:
ls | xargs -I{} sh -c "test -x '{}' && echo '{}'"
This could be made handy through a (non-production-ready) function:
xfilter() {
    xargs -I{} sh -c "$* '{}' && echo '{}'"
}
ls | xfilter test -x
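One quoting caveat with the function above: the command and each line are pasted into a shell string, so input containing quotes or $ can break it, or even execute code. A loop-based sketch avoids that by passing the line as a real argument:

xfilter() {
    # For each input line L, run COMMAND [ARG]... L and print L
    # only if the command exits 0.
    while IFS= read -r line; do
        "$@" "$line" && printf '%s\n' "$line"
    done
}
ls | xfilter test -x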
Alternatively, you could use a parallel filter implementation via GNU Parallel:
ls | parallel "test -x '{}' && echo '{}'"
So, you're looking for:
reduce( compare( filter( map(.. list()) ) ) )
which can be rewritten as
list | map | filter | compare | reduce
The main power of bash is pipelining, so there is no need for a special filter and/or reduce command. In fact, nearly all unix commands can act in one (or more) of these roles:
list
map
filter
reduce
Imagine:
find mydir -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^------list+filter------^ ^--------map-----------^ ^--filter--^ ^compare^ ^reduce^
Creating a test case:
mkdir ./testcase
cd ./testcase || exit 1
for i in {1..10}
do
    strings -1 < /dev/random | head -1000 > file.$i.txt
done
mkdir emptydir
You will get a directory named testcase and in this directory 10 files and one directory
emptydir file.1.txt file.10.txt file.2.txt file.3.txt file.4.txt file.5.txt file.6.txt file.7.txt file.8.txt file.9.txt
Each file contains 1000 lines of random strings; some lines contain only numbers.
Now run the command
find testcase -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
and you will get the largest number-only line across all files, e.g. 42. (Of course, this can be done more efficiently; this is only a demo.)
decomposed:
The find testcase -type f -print prints every plain file, so: LIST (already restricted to files only). Output:
testcase/file.1.txt
testcase/file.10.txt
testcase/file.2.txt
testcase/file.3.txt
testcase/file.4.txt
testcase/file.5.txt
testcase/file.6.txt
testcase/file.7.txt
testcase/file.8.txt
testcase/file.9.txt
The xargs grep -H '^[0-9]*$' acts as MAP: it runs a grep command for each file in the list. grep is usually used as a filter, e.g. command | grep, but here (with xargs) it maps the input (filenames) to lines containing only digits. Output, many lines like:
testcase/file.1.txt:1
testcase/file.1.txt:8
....
testcase/file.9.txt:4
testcase/file.9.txt:5
The structure of each line is filename, colon, number. We want only the numbers, so we apply a pure filter, cut -d: -f2, which strips the filename from each line. It outputs many lines like:
1
8
...
4
5
Now the reduce (getting the largest number): sort -nr sorts all numbers numerically in reverse (descending) order, so its output is like:
42
18
9
9
...
0
0
and the head -1 prints the first line (the largest number).
Of course, you can write your own list/filter/map/reduce functions directly with bash programming constructs (loops, conditions and such), or you can employ any full-blown scripting language like perl, or special-purpose languages like awk, sed, or dc (RPN), and so on.
Having a special filter command such as:
list | filter_command cut -d: -f 2
is simply not needed, because you can directly use
list | cut
You can have awk perform both the filter and reduce roles.
Filter (keep only the even-numbered input lines):
awk 'NR % 2 == 0'
Reduce (sum all input lines):
awk '{ p = p + $0 } END { print p }'
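For instance, a quick demo with seq as the input stream:

seq 1 6 | awk 'NR % 2 == 0'                      # filter: prints 2, 4, 6
seq 1 6 | awk '{ p = p + $0 } END { print p }'   # reduce: prints 21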
I totally understand your question here, as a long-time functional programmer, and here is the answer: Bash/unix command pipelining isn't as clean as you'd hoped.
In the example above:
find mydir -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^------list+filter------^ ^--------map-----------^ ^--filter--^ ^compare^ ^reduce^
a more pure form would look like:
find mydir | xargs -L 1 bash -c 'test -f $1 && echo $1' _ | grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^---list--^^-------filter---------------------------------^^------map----------^^--map-------^ ^reduce^
But, for example, grep also has a filtering capability: grep -q mypattern simply returns 0 if the input matches the pattern.
To get something more like what you want, you would have to define a filter bash function and export it so it is visible to xargs.
But then you get into some problems. Like, test has binary and unary operators. How will your filter function handle this? And what would you decide to output on true for these cases? Not insurmountable, but weird. Assuming only unary operations:
filter(){
    while read -r LINE || [[ -n "${LINE}" ]]; do
        eval "[[ ${LINE} $1 ]]" 2> /dev/null && echo "$LINE"
    done
}
so you could do something like
seq 1 10 | filter "> 4"
5
6
7
8
9
As I wrote this, I kinda liked it.
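One subtlety: inside [[ ]], > compares strings lexically, which is why 10 is absent from the output above even though 10 > 4 arithmetically. A sketch of an arithmetic variant (nfilter is a made-up name):

nfilter() {
    # (( ... )) evaluates an arithmetic expression, so "> 4"
    # compares numbers rather than strings.
    while read -r n; do
        (( n $1 )) 2> /dev/null && echo "$n"
    done
}
seq 1 10 | nfilter "> 4"   # prints 5 through 10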

Mounted volumes & bash in OSX

I'm working on a disk space monitor script in OSX and am struggling to first generate a list of volumes. I need this list to be generated dynamically as it changes over time; having this work properly would also make the script portable.
I'm using the following script snippet:
#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin export PATH
FS=$(df -l | grep -v Mounted| awk ' { print $6 } ')
while IFS= read -r line
do
    echo $line
done < "$FS"
Which generates:
test.sh: line 9: /
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup: No such file or directory
I need the script to generate output like this:
/
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup
Ideas, suggestions? Alternate or better methods of generating a list of mounted volumes are also welcome.
Thanks!
Dan
< is for reading from a file. You are not reading from a file but from a bash variable. So try using <<< instead of < on the last line.
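Applied to the snippet in the question, only the last lines change:

while IFS= read -r line
do
    echo "$line"    # quoting $line preserves the spacing within each line
done <<< "$FS"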
Alternatively, you don't need to store the results in a variable, then read from the variable; you can directly read from the output of the pipeline, like this (I have created a function for neatness):
get_data() {
    df -l | grep -v Mounted | awk '{ print $6 }'
}
get_data | while IFS= read -r line
do
    echo "$line"
done
Finally, the loop doesn't do anything useful, so you can just get rid of it:
df -l | grep -v Mounted| awk ' { print $6 } '
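One remaining caveat: awk '{ print $6 }' splits on whitespace, so a volume name containing spaces (not unusual under /Volumes) gets truncated at its first word. A sketch of a workaround, assuming your df prints the mount point last on each line:

# Strip everything up to the last "% " so the full mount point survives.
# (Breaks in the unlikely case a mount point itself contains "% ".)
df -l | sed 1d | sed 's/.*% *//'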
