How to save the output of the first command in a pipeline to a variable in bash? - bash

I would like to save the output of the first command in a pipeline to a variable while still sending it down the pipe.
For example: find -type d | grep -E '^\./y'. My variable should hold the output of find -type d.
Thanks for the help.
EDIT
Maybe I can solve this problem another way, but now I have run into another problem: how do I call my own function with a parameter from a pipeline?
EX: find -type d | MyFunction

RE:
EDIT Maybe I can solve this problem another way, but now I have run into another problem: how do I call my own function with a parameter from a pipeline?
EX: find -type d | MyFunction
The following all work:
$ cat ./blah.sh
#!/bin/bash
function blah {
    while read i; do
        echo "$i"
    done
}
find ~/opt -type d | blah
blah <<< $(find ~/opt -type d)
blah < <(find ~/opt -type d)
$ ./blah.sh
/home/me/opt
/home/me/opt/bin
/home/me/opt /home/me/opt/bin
/home/me/opt
/home/me/opt/bin
So I'd imagine if find -type d | MyFunction doesn't work, then the function is probably not looking for input on stdin.
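If MyFunction instead expects its input as a positional parameter rather than on stdin, one hedged sketch (MyFunction is the asker's placeholder name) is to read the pipe in a loop and pass each line along:
MyFunction() {
    echo "got: $1"
}
find . -type d | while IFS= read -r d; do
    MyFunction "$d"
done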

Based on Andrew's comments to Perry, perhaps
find . -type d -print0 | while IFS= read -r -d "" dir; do
    # do something with "$dir"
    case "$dir" in
        ./y*) echo "$dir" ;;
        *) : ;;
    esac
done

You can easily capture the output of any command in a variable using the $() syntax (formerly backticks), like this:
VARIABLE=$(command)
You can then pipe the output of echo "$VARIABLE" into the next command.
Keep in mind, though, that while bash variables themselves are limited only by available memory, passing a very large value as an argument to an external command can run into the operating system's ARG_MAX limit, so be cautious about stuffing arbitrarily large outputs into variables.
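Applied to the original question, a minimal sketch (assuming the goal is simply to reuse the captured output):
dirs=$(find . -type d)                     # capture the full output once
printf '%s\n' "$dirs" | grep -E '^\./y'    # re-emit it into the pipeline
echo "$dirs"                               # the variable is still available afterwards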

You can feed variables like this into an array with bash, without any loop:
$ read -a array <<< $(find 2>/dev/null -type d | grep -E 'test_[0-9]+')
$ echo ${array[@]}
./test_003.t ./test_002.t ./test_001.t
$ echo ${array[1]}
./test_002.t
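On bash 4+, mapfile (a.k.a. readarray) avoids the word-splitting subtleties of read -a <<< $(...) entirely; a sketch using the same hypothetical test_* directories:
mapfile -t array < <(find . 2>/dev/null -type d | grep -E 'test_[0-9]+')
echo "${array[@]}"    # all elements
echo "${array[1]}"    # second element, e.g. ./test_002.t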

Related

How to concatenate a list of folder paths within a variable that have spaces in them in shell script [duplicate]

I want to iterate over a list of files. This list is the result of a find command, so I came up with:
getlist() {
    for f in $(find . -iname "foo*")
    do
        echo "File found: $f"
        # do something useful
    done
}
It's fine except if a file has spaces in its name:
$ ls
foo_bar_baz.txt
foo bar baz.txt
$ getlist
File found: foo_bar_baz.txt
File found: foo
File found: bar
File found: baz.txt
What can I do to avoid the split on spaces?
You could replace the word-based iteration with a line-based one:
find . -iname "foo*" | while read f
do
    # ... loop body
done
There are several workable ways to accomplish this.
If you wanted to stick closely to your original version it could be done this way:
getlist() {
    IFS=$'\n'    # note: this change to IFS persists after the function returns
    for file in $(find . -iname 'foo*') ; do
        printf 'File found: %s\n' "$file"
    done
}
This will still fail if file names have literal newlines in them, but spaces will not break it.
However, messing with IFS isn't necessary. Here's my preferred way to do this:
getlist() {
    while IFS= read -d $'\0' -r file ; do
        printf 'File found: %s\n' "$file"
    done < <(find . -iname 'foo*' -print0)
}
If you find the < <(command) syntax unfamiliar you should read about process substitution. The advantage of this over for file in $(find ...) is that files with spaces, newlines and other characters are correctly handled. This works because find with -print0 will use a null (aka \0) as the terminator for each file name and, unlike newline, null is not a legal character in a file name.
The advantage to this over the nearly-equivalent version
getlist() {
    find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
        printf 'File found: %s\n' "$file"
    done
}
is that any variable assignment in the body of the while loop is preserved. That is, if you pipe to while as above, then the body of the while runs in a subshell, which may not be what you want.
The advantage of the process substitution version over find ... -print0 | xargs -0 is minimal: The xargs version is fine if all you need is to print a line or perform a single operation on the file, but if you need to perform multiple steps the loop version is easier.
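To make the subshell pitfall concrete, here is a small sketch: the counter survives only in the process-substitution version.
count=0
find . -iname 'foo*' -print0 | while IFS= read -d '' -r file ; do
    count=$((count + 1))
done
echo "$count"    # prints 0: the piped while ran in a subshell

count=0
while IFS= read -d '' -r file ; do
    count=$((count + 1))
done < <(find . -iname 'foo*' -print0)
echo "$count"    # prints the real number of matches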
EDIT: Here's a nice test script so you can get an idea of the difference between different attempts at solving this problem
#!/usr/bin/env bash

dir=/tmp/getlist.test/
mkdir -p "$dir"
cd "$dir"

touch 'file not starting foo' foo foobar barfoo 'foo with spaces' \
    'foo with'$'\n'newline 'foo with trailing whitespace '

# while with process substitution, null terminated, empty IFS
getlist0() {
    while IFS= read -d $'\0' -r file ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done < <(find . -iname 'foo*' -print0)
}

# while with process substitution, null terminated, default IFS
getlist1() {
    while read -d $'\0' -r file ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done < <(find . -iname 'foo*' -print0)
}

# pipe to while, newline terminated
getlist2() {
    find . -iname 'foo*' | while read -r file ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done
}

# pipe to while, null terminated
getlist3() {
    find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done
}

# for loop over subshell results, newline terminated, default IFS
getlist4() {
    for file in "$(find . -iname 'foo*')" ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done
}

# for loop over subshell results, newline terminated, newline IFS
getlist5() {
    IFS=$'\n'
    for file in $(find . -iname 'foo*') ; do
        printf 'File found: '"'%s'"'\n' "$file"
    done
}

# see how they run
for n in {0..5} ; do
    printf '\n\ngetlist%d:\n' $n
    eval getlist$n
done

rm -rf "$dir"
There is also a very simple solution: rely on bash globbing
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
$ ls
stupid file 3 stupid file1 stupid file2
$ for file in *; do echo "file: '${file}'"; done
file: 'stupid file 3'
file: 'stupid file1'
file: 'stupid file2'
Note that I'm not sure this behavior is the default, but I don't see any special setting in my shopt, so I would say it should be "safe" (tested on OS X and Ubuntu).
find . -iname "foo*" -print0 | xargs -L1 -0 echo "File found:"
find . -name "fo*" -print0 | xargs -0 ls -l
See man xargs.
Since you aren't doing any other type of filtering with find, you can use the following as of bash 4.0:
shopt -s globstar
getlist() {
    for f in **/foo*
    do
        echo "File found: $f"
        # do something useful
    done
}
The **/ will match zero or more directories, so the full pattern will match foo* in the current directory or any subdirectories.
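One caveat worth a sketch: if nothing matches, the unexpanded pattern itself is returned, so you may also want nullglob:
shopt -s globstar nullglob    # nullglob: unmatched patterns expand to nothing
for f in **/foo*
do
    echo "File found: $f"
done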
I really like for loops and array iteration, so I figure I will add this answer to the mix...
I also liked marchelbling's stupid file example. :)
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
Inside the test directory:
readarray -t arr <<< "`ls -A1`"
This adds each file listing line into a bash array named arr with any trailing newline removed.
Let's say we want to give these files better names...
for i in ${!arr[@]}
do
    newname=`echo "${arr[$i]}" | sed 's/stupid/smarter/; s/  */_/g'`
    mv "${arr[$i]}" "$newname"
done
${!arr[@]} expands to 0 1 2, so "${arr[$i]}" is the ith element of the array. The quotes around the variables are important to preserve the spaces.
The result is three renamed files:
$ ls -1
smarter_file1
smarter_file2
smarter_file_3
find has an -exec argument that loops over the find results and executes an arbitrary command. For example:
find . -iname "foo*" -exec echo "File found: {}" \;
Here {} represents each found file; find substitutes the name as a single argument, so no shell word-splitting happens and spaces in file names are handled (substituting {} inside a larger argument, as in the quoted string here, is a GNU find extension).
In many cases you can replace the trailing \; (which runs the command once per file) with a +, which packs multiple files into one command invocation (not necessarily all of them at once, though; see man find for more details).
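For instance, the + form batches many filenames into a single invocation; printf then repeats its format string for each remaining argument, so this sketch prints one line per file:
find . -iname "foo*" -exec printf 'File found: %s\n' {} +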
I recently had to deal with a similar case, and I built a FILES array to iterate over the filenames:
eval FILES=($(find . -iname "foo*" -printf '"%p" '))
The idea here is to surround each filename with double quotes, separate them with spaces and use the result to initialize the FILES array.
The use of eval is necessary to evaluate the double quotes in the find output correctly for the array initialization. Be aware that this breaks (and can even execute arbitrary code) if a filename itself contains double quotes or $( ... ).
To iterate over the files, just do:
for f in "${FILES[@]}"; do
    # Do something with $f
    :
done
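A safer equivalent that avoids eval entirely, assuming bash 4.4+ for mapfile -d, would be this sketch:
mapfile -d '' FILES < <(find . -iname 'foo*' -print0)
for f in "${FILES[@]}"; do
    # Do something with "$f"
    echo "File found: $f"
done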
In some cases, for example if you just need to copy or move a list of files, you could pipe that list to awk as well.
Note the escaped double quotes \" around the field $0 (in short: your files, one line of the list = one file).
find . -iname "foo*" | awk '{print "mv \""$0"\" ./MyDir2" | "sh" }'
Ok - my first post on Stack Overflow!
Though my problems with this have always been in csh, not bash, the solution I present will, I'm sure, work in both. The issue is with the shell's interpretation of the ls output. We can remove ls from the problem by simply using the shell expansion of the * wildcard, but this gives a "no match" error if there are no files in the current (or specified) folder. To get around this we simply extend the expansion to include dot-files, * .*, which will always yield results, since the files . and .. are always present. So in csh we can use this construct ...
foreach file (* .*)
    echo $file
end
If you want to filter out the standard dot-files, that is easy enough ...
foreach file (* .*)
    if ("$file" == .) continue
    if ("$file" == ..) continue
    echo $file
end
The code in the first post on this thread would be written thus:
getlist() {
    for f in * .*
    do
        echo "File found: $f"
        # do something useful
    done
}
Hope this helps!
Another solution for the job...
The goal was to:
select/filter filenames recursively in directories
handle each name (whatever spaces are in the path...)
#!/bin/bash -e
## Trick in order to handle files with spaces in their path...
OLD_IFS=${IFS}
IFS=$'\n'
files=($(find "${INPUT_DIR}" -type f -name "*.md"))
for filename in "${files[@]}"
do
    # do your stuff
    # ....
    :
done
IFS=${OLD_IFS}
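On bash 4.4+ the same goal can be met without touching IFS at all, and the result also survives newlines in paths; a sketch:
mapfile -d '' files < <(find "${INPUT_DIR}" -type f -name "*.md" -print0)
for filename in "${files[@]}"
do
    # do your stuff with "$filename"
    :
done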

Loop over find result in bash

I have a bash script written by a previous colleague in my company. Its shellcheck result is horrible, and I, who use zsh, can't run the script. He seems to use the notorious find-with-a-for-loop pattern in bash, but I can't figure out how to make it better.
At the moment I have a temporary fix.
This is his code:
#!/bin/bash
releases=$(for d in $(find ${DELIVERIES} -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo ${d##*_} ; done)
for sr in ${releases[@]}
do
    echo "Release $sr"
    deliveries=$(find ${deliveries_path}/*${sr}/ -type f -name "*.ear" -o -name "*.war" | sort)
    if [ ! -e ${sr}.txt ]
    then
        for d in ${deliveries[@]}
        do
            echo "$(basename $d)" | tee -a ${sr}.txt
        done
    fi
    echo
done
And this is my code, which at least manages to loop over the first part:
#!/bin/bash
for release in $(for d in $(find "${DELIVERIES}" -maxdepth 1 -type d -name "*_delivery_33_SR*" | sort) ; do echo "${d##*_}" ; done)
do
    echo "Release $release"
done
As you can see, I needed to put the find inside the loop, and I can't save it in a variable, because when I try to loop over it, the \n everywhere makes it behave like a single element. Could anyone suggest how I should solve this problem? This previous colleague uses this kind of find search a lot.
EDIT:
The script went to each folder with a specific name, created a file X.X.X.txt with the version number in the X part, and appended the filenames inside the subfolder to that X.X.X.txt.
Blindly refactoring gets me something like
#!/bin/bash
for d in "$DELIVERIES"/*_delivery_33_SR*/; do
    d=${d%/}    # strip the trailing slash the glob leaves behind
    sr=${d##*_}
    echo "Release $sr"
    if [ ! -e "${sr}.txt" ]
    then
        # parentheses so that -type f applies to both name patterns
        find "${deliveries_path}"/*"${sr}"/ -type f \( -name "*.ear" -o -name "*.war" \) |
            sort |
            xargs -n 1 basename |
            tee -a "$sr.txt"
    fi
    echo
done

How to get list of certain strings in a list of files using bash?

The title is maybe not really descriptive, but I couldn't find a more concise way to describe the problem.
I have a directory containing different files which have a name that e.g. looks like this:
{some text}2019Q2{some text}.pdf
So the filenames have somewhere in the name a year followed by a capital Q and then another number. The other text can be anything, but it won't contain anything matching the format year-Q-number. There will also be no numbers directly before or after this format.
I can work something out to get this from one filename, but I actually need a 'list' so I can do a for-loop over this in bash.
So, if my directory contains the files:
costumerA_2019Q2_something.pdf
costumerB_2019Q2_something.pdf
costumerA_2019Q3_something.pdf
costumerB_2019Q3_something.pdf
costumerC_2019Q3_something.pdf
costumerA_2020Q1_something.pdf
costumerD2020Q2something.pdf
I want a for loop that goes over 2019Q2, 2019Q3, 2020Q1, and 2020Q2.
EDIT:
This is what I have so far. It is able to extract the substrings, but the output still has duplicates, and since I'm already inside the loop I don't see how I can remove them.
find original/*.pdf -type f -print0 | while IFS= read -r -d '' line; do
    echo "$line" | grep -oP '[0-9]{4}Q[0-9]'
done
# list all _filenames_ that end with .pdf in the folder original
find original -maxdepth 1 -name '*.pdf' -type f -printf "%f\n" |
# extract the pattern
sed 's/.*\([0-9]\{4\}Q[0-9]\).*/\1/' |
# iterate
while IFS= read -r file; do
    echo "$file"
done
I used -printf "%f\n" to print just the filename instead of the full path. GNU sed has a -z option that you can use with -print0 (or -printf "%f\0").
With the approach you wanted, if your files have no newlines in their names there is no need to loop over the list in bash at all (as a rule of thumb, try to avoid while read line; it's very slow):
find original -maxdepth 1 -name '*.pdf' -type f | grep -oP '[0-9]{4}Q[0-9]'
or with a zero-separated stream:
find original -maxdepth 1 -name '*.pdf' -type f -print0 |
grep -zoP '[0-9]{4}Q[0-9]' | tr '\0' '\n'
If you want to remove duplicate elements from the list, pipe it to sort -u.
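Putting that together for the asker's loop, a sketch (assuming the names contain no newlines):
for q in $(find original -maxdepth 1 -name '*.pdf' -type f |
           grep -oP '[0-9]{4}Q[0-9]' | sort -u)
do
    echo "Quarter: $q"
done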
Try this, in bash:
~ > $ ls
costumerA_2019Q2_something.pdf costumerB_2019Q2_something.pdf
costumerA_2019Q3_something.pdf other.pdf
costumerA_2020Q1_something.pdf someother.file.txt
~ > $ for x in `(ls)`; do [[ ${x} =~ [0-9]Q[1-4] ]] && echo $x; done;
costumerA_2019Q2_something.pdf
costumerA_2019Q3_something.pdf
costumerA_2020Q1_something.pdf
costumerB_2019Q2_something.pdf
~ > $ (for x in *; do [[ ${x} =~ ([0-9]{4}Q[1-4]).+pdf ]] && echo ${BASH_REMATCH[1]}; done;) | sort -u
2019Q2
2019Q3
2020Q1

Getting the output of a grep command in a loop

I have a shell script that includes this search:
find . -type f -exec grep -iPho "barh(li|mar|ag)" {} \;
I want to capture each string the grep command finds and send it to a function I will create, named "parser":
parser(){
    # do stuff with each single grep result found
    :
}
How can that be done? Is this right?
find . -type f -exec grep -iPho "barh(li|mar|ag)" {parser $1} \;
I do not want to output the entire find command result to the function
Only a shell can execute a function, so you need to use bash -c in your find in order to run it. That is also the reason you need to export your function: so that the new bash process sees it.
parser() {
    while IFS= read -r line; do
        echo "Processing line: $line"
    done <<< "$1"
}
export -f parser
find . -type f -exec bash -c 'parser "$(grep -iPho "barh(li|mar|ag)" "$1")"' -- {} \;
The code above will send all occurrences from file1, then file2 etc to your function to process. It will not send each line one by one and therefore you need to loop over the lines in your function. If there is no occurrence of your regex in a file, it will still call your function with an empty input!
That might not be the best solution for you so let's try to add the loop inside the bash -c statement and really process the lines one by one:
parser() {
    echo "Processing line: $1"
}
export -f parser
find . -type f -exec bash -c 'grep -iPho "barh(li|mar|ag)" "$#" | while IFS= read -r line; do parser "$line"; done' -- {} +
EDIT: Very nice and simple solution not using bash -c, suggested by @gniourf_gniourf:
parser() {
    echo "Processing line: $1"
}
find . -type f -exec grep -iPho "barh(li|mar|ag)" {} + | while IFS= read -r line; do parser "$line"; done
This approach works fine and processes each line one by one. You also do not need to export your function with this approach. But you have to watch for something that might surprise you.
Each command in a pipeline executes in its own subshell, so any variable assignment in your parser function, or in the while loop in general, is lost once that subshell exits. If you are writing a script, a simple shopt -s lastpipe will suffice to run the last pipeline command in the current shell environment (see the sketch after the next example). Or you can use process substitution:
parser() {
    echo "Processing line: $1"
}
while IFS= read -r line; do
    parser "$line"
done < <(find . -type f -exec grep -iPho "barh(li|mar|ag)" {} +)
Note that in the previous bash -c examples, you will experience the same behavior and your variable assignments will be lost as well.
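For completeness, here is a lastpipe sketch of the same loop; remember that lastpipe only takes effect when job control is off, i.e. in a non-interactive script:
#!/bin/bash
shopt -s lastpipe
parser() {
    echo "Processing line: $1"
}
count=0
find . -type f -exec grep -iPho "barh(li|mar|ag)" {} + |
while IFS= read -r line; do
    parser "$line"
    count=$((count + 1))
done
echo "Processed $count lines"    # preserved: the while ran in the current shell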
You need to export your function.
You also need to call bash to execute the function.
parser() {
    echo "GOT: $1"
}
export -f parser
find Projects/ -type f -name '*rb' -exec bash -c 'parser "$0"' {} \;
I suggest you use sed; it is a more powerful tool for text processing.
For example, if I want to append the string "myparse" to every line that ends in "ha", I can do it like this:
# echo "haha" > text1
# echo "hehe" > text2
# echo "heha" > text3
# find . -type f -exec sed '/ha$/s/ha$/ha myparse/' {} \;
haha myparse
heha myparse
hehe
If you really want to modify the files in place, not just print to stdout, you can do it like this:
# find . -type f -exec sed -i '/ha$/s/ha$/ha myparse/' {} \;

How do I return all the paths after a set point using bash?

I am returning this:
./nas/cdn/catalog/swatches
./nas/cdn/catalog/product_shots
./nas/cdn/catalog/product_shots/high_res
./nas/cdn/catalog/product_shots/high_res/back
./nas/cdn/catalog/product_shots/high_res/front
./nas/cdn/catalog/product_shots/low_res
./nas/cdn/catalog/product_shots/low_res/back
./nas/cdn/catalog/product_shots/low_res/front
./nas/cdn/catalog/product_shots/thumbs
./nas/cdn/catalog/full_length
./nas/cdn/catalog/full_length/high_res
./nas/cdn/catalog/full_length/low_res
./nas/cdn/catalog/cropped
./nas/cdn/catalog/drawings
What is the correct way to remove ./nas/cdn/catalog/ from this?
This is the code I have so far:
BASE='./nas/cdn/catalog'
echo $BASE
for d in $(find . -type d -regex "${BASE}/[^.]*")
do
    echo $(basename $d)
done
But this just returns the last folder; I'd like to return /swatches, /product_shots/high_res, etc.
Use sed like below:
BASE='./nas/cdn/catalog'
echo $BASE
for d in $(find . -type d -regex "${BASE}/[^.]*")
do
    sed 's~^\([^/]*/\)\{4\}~~' <<< "$d"
done
Example:
$ var="./nas/cdn/catalog/drawings"
$ sed 's~^\([^/]*/\)\{4\}~~' <<< "$var"
drawings
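A sed-free alternative is plain parameter expansion to strip the prefix; a sketch:
BASE='./nas/cdn/catalog'
find "$BASE" -mindepth 1 -type d | while IFS= read -r d; do
    echo "${d#"$BASE"/}"    # strip the leading prefix, leaving e.g. swatches
done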
A somewhat simpler approach:
BASE='./nas/cdn/catalog'
echo "$BASE"
( cd "$BASE" ; find */ -type d )
Note: this is not perfectly robust; it will fail when any of the directories immediately inside $BASE starts with a hyphen. It should only be used when you can guarantee that is not the case.
