Using a pipe inside a bash function

Is there a way I can make this one-liner into a bash function?
mdfind -name autoflush.py | grep -Ev 'Library|VMWare|symf|larav' | sort
I tried to do it like this:
function mdf () { mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort }
but didn't have success with it.
Can't I use the pipe operator inside functions in bash?
My next approach was this:
function mdf () {
result=mdfind -name "$1"
grepped_result=grep -Ev 'Library|VMWare|symf|larav' $result
sort $grepped_result # return sort $grepped_result ?
}
I am guessing there are many conceptual errors in my approach, so I would appreciate any help and input.

You're missing a semicolon in the first attempt.
mdf() { mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort; }
It's just a quirk of shell syntax that you need one before the closing brace. If the closing brace is on its own line, you don't need it.
mdf() {
mdfind -name "$1" | grep -Ev 'Library|VMWare|symf|larav' | sort
}
(I've removed the function keyword. For compatibility's sake you should write either func() or function func but not combine them.)
Give shellcheck.net a try the next time you're stuck. It's a syntax checker for shell scripts. A real godsend.
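The second attempt fails for a different reason: result=mdfind -name "$1" doesn't run mdfind; it runs the command -name "$1" with result=mdfind in its environment. To capture a command's output you need command substitution, $(...). A sketch of that approach (mdfind is macOS-only, so printf stands in for its output here):

```shell
mdf() {
    # result=$(cmd) captures cmd's output; result=cmd would just assign the word "cmd"
    result=$(printf '%s\n' 'Library/autoflush.py' 'code/autoflush.py')  # stand-in for: mdfind -name "$1"
    grepped_result=$(printf '%s\n' "$result" | grep -Ev 'Library|VMWare|symf|larav')
    printf '%s\n' "$grepped_result" | sort
}
mdf   # prints code/autoflush.py
```

In practice you would replace the printf with the real mdfind -name "$1" call; but the pipeline form above is simpler and avoids the intermediate variables entirely.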

Related

Search for numbers in files and replace the match with "result -1"

I want to subtract 1 from every number before the character '=' in a list of files. For example, a string in a file such as "sometext.10.moretext" should become "sometext.9.moretext".
I thought this might work:
grep -rl "[0-9]*" ./TEST | xargs sed -i "s/[0-9]*/&-1/g"
But it merely adds "-1" as a string after my numbers, so that the result is "sometext.10-1.moretext". I'm not really experienced with bash (and I'm using it via Windows). Is there a way to do this? PowerShell would also be an option.
edit
Input: some.text.10.text=some.other.text.10
Desired Output: some.text.9.text=some.other.text.10
Note: The actual number can be something from 1 to 9999.
File names have the following pattern: text#name#othername.config
You can use awk to achieve this.
I'm assuming the words in your strings are delimited by . and that digits don't appear immediately before =; otherwise this will fail, since . is the only delimiter used.
awk 'BEGIN{FS=OFS="."} { for(i=1; i<=NF; i++) { if($i~"=")break; if($i~/^[0-9]+$/){$i=$i-1} }}1'
Input :
sometext.10.moretext=meow.10.meow
Output:
sometext.9.moretext=meow.10.meow
PowerShell
Get-ChildItem -Recurse | # List files
Select-String -Pattern '[0-9]' -List | # 'grep' files with numbers
Foreach-Object { # loop over those files
(Get-Content -LiteralPath $_.Path) | # read their content
ForEach-Object { # loop over the lines, do regex replace
[regex]::Replace($_, '[0-9]+', {param($match) ([int]$match.Value) - 1})
} | Set-Content -Path $_.Path -Encoding ASCII # output the lines
}
or in short form
gci -r|sls [0-9] -lis|% {(gc -lit $_.Path)|%{
[regex]::Replace($_,'[0-9]+',{"$args"-1})
}|sc $_.Path -enc ascii}
You can try
echo "sometext.10.moretext=meow.10.meow" |
sed -r 's/([^0-9]*)([0-9]*)(.*)/echo "\1$((\2-1))\3"/e'
Or changing files under TEST (see EDIT)
sed -ri 's/([^0-9]*)([0-9]*)(.*)/echo "\1$((\2-1))\3"/e' $(find ./TEST -type f)
EDIT
The $(find ...) command substitution will cause problems when filenames contain spaces or newlines, so you should change the approach:
(Do not use grep -rlz "[0-9]*" ./TEST; that failed earlier.)
find TEST -type f -print0 | xargs -0 sed -ri 's/([^0-9]*)([0-9]+)(.*)/echo "\1$((\2-1))\3"/e'
echo sometext.10.moretext=meow.10.meow| awk '{sub(/10/,"9")}1'
sometext.9.moretext=meow.10.meow

If xargs is map, what is filter?

I think of xargs as the map function of the UNIX shell. What is the filter function?
EDIT: it looks like I'll have to be a bit more explicit.
Let's say I have to hand a program which accepts a single string as a parameter and returns with an exit code of 0 or 1. This program will act as a predicate over the strings that it accepts.
For example, I might decide to interpret the string parameter as a filepath, and define the predicate to be "does this file exist". In this case, the program could be test -f, which, given a string, exits with 0 if the file exists, and 1 otherwise.
I also have to hand a stream of strings. For example, I might have a file ~/paths containing
/etc/apache2/apache2.conf
/foo/bar/baz
/etc/hosts
Now, I want to create a new file, ~/existing_paths, containing only those paths that exist on my filesystem. In my case, that would be
/etc/apache2/apache2.conf
/etc/hosts
I want to do this by reading in the ~/paths file, filtering those lines by the predicate test -f, and writing the output to ~/existing_paths. By analogy with xargs, this would look like:
cat ~/paths | xfilter test -f > ~/existing_paths
It is the hypothesized program xfilter that I am looking for:
xfilter COMMAND [ARG]...
Which, for each line L of its standard input, will call COMMAND [ARG]... L, and if the exit code is 0, it prints L, else it prints nothing.
To be clear, I am not looking for:
a way to filter a list of filepaths by existence. That was a specific example.
how to write such a program. I can do that.
I am looking for either:
a pre-existing implementation, like xargs, or
a clear explanation of why this doesn't exist
If map is xargs, filter is... still xargs.
Example: list files in the current directory and filter out non-executable files:
ls | xargs -I{} sh -c "test -x '{}' && echo '{}'"
This can be wrapped in a handy (though not production-ready) function:
xfilter() {
xargs -I{} sh -c "$* '{}' && echo '{}'"
}
ls | xfilter test -x
Alternatively, you could use a parallel filter implementation via GNU Parallel:
ls | parallel "test -x '{}' && echo '{}'"
So, you're looking for:
reduce( compare( filter( map(.. list()) ) ) )
which can be rewritten as
list | map | filter | compare | reduce
The main power of bash is pipelining, so there is no need for a special filter and/or reduce command. In fact, nearly all unix commands can act in one (or more) of these roles:
list
map
filter
reduce
Imagine:
find mydir -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^------list+filter------^ ^--------map-----------^ ^--filter--^ ^compare^ ^reduce^
Creating a test case:
mkdir ./testcase
cd ./testcase || exit 1
for i in {1..10}
do
strings -1 < /dev/random | head -1000 > file.$i.txt
done
mkdir emptydir
You will get a directory named testcase containing 10 files and one subdirectory:
emptydir file.1.txt file.10.txt file.2.txt file.3.txt file.4.txt file.5.txt file.6.txt file.7.txt file.8.txt file.9.txt
Each file contains 1000 lines of random strings; some lines contain only numbers.
now run the command
find testcase -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
and you will get the largest number-only line across all files, e.g. 42. (Of course, this could be done more efficiently; it is only a demo.)
Decomposed:
The find testcase -type f -print will print every plain file, so: LIST (already reduced to files only). Output:
testcase/file.1.txt
testcase/file.10.txt
testcase/file.2.txt
testcase/file.3.txt
testcase/file.4.txt
testcase/file.5.txt
testcase/file.6.txt
testcase/file.7.txt
testcase/file.8.txt
testcase/file.9.txt
The xargs grep -H '^[0-9]*$' is the MAP: it runs a grep command for each file in the list. grep is usually used as a filter (e.g. command | grep), but here (with xargs) it transforms the input (filenames) into lines containing only digits. Output, many lines like:
testcase/file.1.txt:1
testcase/file.1.txt:8
....
testcase/file.9.txt:4
testcase/file.9.txt:5
The structure of the lines is filename, colon, number. We want only the numbers, so we apply a pure filter that strips the filename from each line: cut -d: -f2. It outputs many lines like:
1
8
...
4
5
Now the reduce (getting the largest number): sort -nr sorts all numbers numerically in reverse (descending) order, so its output is like:
42
18
9
9
...
0
0
and head -1 prints the first line (the largest number).
Of course, you can write your own list/filter/map/reduce functions directly with bash programming constructs (loops, conditionals and such), or you can employ any full-blown scripting language like perl, or special-purpose languages like awk, the sed "language", or dc (RPN) and such.
A special filter command such as:
list | filter_command cut -d: -f 2
simply isn't needed, because you can use
list | cut
directly.
You can have awk do the filter and reduce function.
Filter (keep only the even-numbered lines):
awk 'NR % 2 == 0'
Reduce:
awk '{ p = p + $0 } END { print p }'
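Both one-liners are easy to try with seq as a data source; a quick demo with made-up inputs (a line-number filter and a running sum):

```shell
# Filter: keep only the even-numbered input lines
seq 1 6 | awk 'NR % 2 == 0'   # prints 2, 4, 6
# Reduce: sum every input line into one number
seq 1 100 | awk '{ p = p + $0 } END { print p }'   # prints 5050
```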
I totally understand your question here as a long time functional programmer and here is the answer: Bash/unix command pipelining isn't as clean as you'd hoped.
In the example above:
find mydir -type f -print | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^------list+filter------^ ^--------map-----------^ ^--filter--^ ^compare^ ^reduce^
a more pure form would look like:
find mydir | xargs -L 1 bash -c 'test -f "$1" && echo "$1"' _ | xargs grep -H '^[0-9]*$' | cut -d: -f 2 | sort -nr | head -1
^---list--^^-------filter----------------------------------^^---------map---------^^----map----^ ^reduce^
But, for example, grep also has a filtering capability: grep -q mypattern, which simply returns 0 if the input matches the pattern.
To get something more like what you want, you would simply have to define a filter bash function and export it (export -f) so it can be used from xargs.
But then you run into some problems. For example, test has binary and unary operators; how will your filter function handle this? And what would you decide to output on true for those cases? Not insurmountable, but weird. Assuming only unary operations:
filter(){
while read -r LINE || [[ -n "${LINE}" ]]; do
eval "[[ ${LINE} $1 ]]" 2> /dev/null && echo "$LINE"
done
}
so you could do something like
seq 1 10 | filter "> 4"
5
6
7
8
9
As I wrote this, I kinda liked it.
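One caveat with that sketch: inside [[ ]], > compares strings lexicographically, so "10" does not beat "4". For numbers, the arithmetic operators (-gt and friends) work with the same function unchanged (redefined here so the snippet stands alone; bash-only because of [[ ]]):

```shell
filter() {
    while read -r LINE || [[ -n "${LINE}" ]]; do
        eval "[[ ${LINE} $1 ]]" 2> /dev/null && echo "$LINE"
    done
}
seq 1 10 | filter "> 4"     # lexical: prints 5..9, silently drops 10
seq 1 10 | filter "-gt 4"   # numeric: prints 5 through 10
```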

Unix. Call a variable inside another variable

Currently I have a script like this. The intended purpose of this script is to use the function Getlastreport to retrieve the name of the latest report in a folder. The folder names are typically a randomly generated number each night. I want to call Getlastreport and use its result inside MAXcashfunc.
Example :
Getlastreport returns 3473843.
MAXcashfunc should run grep -r "Max*" /David/reports/$Getlastreport[[the number 3473843 should be here]]/"Moneyfromyesterday.csv" > Report
Script:
#!/bin/bash
Getlastreport()
{
cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
MAXcashfunc()
{
grep -r "Max*" /David/reports/$Getlastreport/"Moneyfromyesterday.csv" > Report
}
##call maxcash func
MAXcashfunc
You can use:
MAXcashfunc() {
grep -r "Max" /David/reports/`Getlastreport`/"Moneyfromyesterday.csv" > Report
}
`Getlastreport` - Call Getlastreport and get its output.
If I follow your question, you could use
function Getlastreport() {
cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
function MAXcashfunc() {
grep -r "Max" /David/reports/$(Getlastreport)/"Moneyfromyesterday.csv" > Report
}
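The heart of the confusion: $Getlastreport is an (unset) variable, while $(Getlastreport) calls the function and substitutes its output. A minimal runnable sketch (the report number is made up, and echo stands in for the real directory listing):

```shell
Getlastreport() { echo "3473843"; }   # stand-in for the real lookup

wrong="/David/reports/$Getlastreport/Moneyfromyesterday.csv"    # variable: expands empty
right="/David/reports/$(Getlastreport)/Moneyfromyesterday.csv"  # function call
echo "$wrong"   # → /David/reports//Moneyfromyesterday.csv
echo "$right"   # → /David/reports/3473843/Moneyfromyesterday.csv
```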

Egrep over multiple lines in a bash script function

I have a function to check a file against multiple long strings separated by pipes. I want to put each string in the script on a new line so it's more readable. I've tried adding backslashes to the end of each line, but they're just caught within the grep statement.
my_function () {
sudo zcat my_file.gz |
egrep -c 'my_long_string_1 |
my_long_string_2 |
my_long_string_3'
}
This does output a result; however, it's incorrect.
Use egrep like this on multiple lines:
my_function () {
sudo zcat my_file.gz |
egrep -c "my_long_string_1|\
my_long_string_2|\
my_long_string_3"
}
Make sure there is no space before or after each backslash, and no leading space on the continuation lines before each pattern.
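If the patterns don't need to share one quoted alternation, another option is one -e per pattern; grep ORs them together, and the trailing backslashes are then ordinary command-line continuations outside the quotes, which are less fragile. (Toy patterns and a printf stand in for the real strings and the sudo zcat; grep -E is the modern spelling of egrep.)

```shell
my_function() {
    printf '%s\n' 'xx my_long_string_1 xx' 'no match here' 'my_long_string_3' |
    grep -c -E \
        -e 'my_long_string_1' \
        -e 'my_long_string_2' \
        -e 'my_long_string_3'
}
my_function   # prints 2 (two of the three lines match)
```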

How to compare two directory names and enter the one that is the highest version

I have two version of tcpdump in the same subdirectory.
tcpdump-4.1.1 and tcpdump-4.3.0
How can I write a bash function to return the highest version?
Edit:
I've got it working now. Here's the code.
#!/bin/bash
# Function to get the latest version of the directory
function getLatestDirVer {
latestDIR=$(ls -v $1* | tail -n 1)
stringLen=`expr length "$latestDIR"`
stringLen=$(($stringLen-1))
latestDIR2=`expr substr $latestDIR 1 $stringLen`
echo $latestDIR2
}
# Main function
echo $(getLatestDirVer tcpdump)
Here's the output:
[luke#machine Desktop]$ ./latestDIRversion.sh
tcpdump-4.3.0
The tcpdump-4.1.1 and tcpdump-4.3.0 directories are in the Desktop directory.
Here's one way using ls. You can use the -v flag to sort by version numbers in filename lowest to highest:
ls -v tcpdump* | tail -n 1
EDIT:
So it turns out, I completely mis-read your question. I thought you were interested in the filenames, but you're actually interested in the directories. You can add the following to your ~/.bashrc, I think it will work for you:
getLatestDirVer () {
for i in $(find ./* -type d | sort --version-sort); do :;done
cd "$i"
}
ls -1d tcpdump*|sort -rV|head -1
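Version sort (ls -v / sort -V, GNU coreutils) is what makes this work: it compares dotted numeric components field by field, where plain lexical or numeric sort would not. A quick sketch with made-up directory names:

```shell
# 4.10.2 > 4.3.0 under version sort, even though lexically "4.10" < "4.3"
printf '%s\n' tcpdump-4.1.1 tcpdump-4.10.2 tcpdump-4.3.0 |
sort -V | tail -n 1   # prints tcpdump-4.10.2
```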
