reducing commands with sed - bash

I'm just interested whether it's possible to reduce these commands to one line without &&?
find /backup/daily.1/var/www/ -iname "*.jpg" -type f >> ~/backuppath.txt
sed 's|/backup/daily.1||g' ~/backuppath.txt > ~/wwwpath.txt
paste -d " " ~/backuppath.txt ~/wwwpath.txt > ~/files.txt
while read line; do cp $line; done < ~/files.txt

I wouldn't write it on one line, but you can do without the intermediate files:
find /backup/daily.1/var/www/ -iname "*.jpg" -type f |
sed 's%/backup/daily.1\(.*\)%cp & \1%' |
sh -x
The sed command splits the file names into two components, the /backup/daily.1 prefix and 'the rest', and replaces that with the complete copy command copying the original name to the name without the prefix. The output of sed is fed to the shell as a script.
This should work fine unless there's a file name that contains shell metacharacters, spaces or newlines. If the file names won't contain newlines or single quotes, you can improve the resiliency with:
find /backup/daily.1/var/www/ -iname "*.jpg" -type f |
sed "s%/backup/daily.1\(.*\)%cp '&' '\1'%" |
sh -x
This wraps each filename in single quotes.
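For example, with a hypothetical path containing a space, the generated command line looks like this:
echo "/backup/daily.1/var/www/my pic.jpg" |
sed "s%/backup/daily.1\(.*\)%cp '&' '\1'%"
# prints: cp '/backup/daily.1/var/www/my pic.jpg' '/var/www/my pic.jpg'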

This does not deal with filenames with spaces. (This is not important, I merely
state this to preempt the inevitable comments.)
find /backup/daily.1/var/www/ -iname "*.jpg" -type f |
while read name; do cp $name ${name#/backup/daily.1}; done
You can also just do:
find /backup/daily.1/var/www/ -iname "*.jpg" \
-type f -exec sh -c 'cp "$0" "${0#/backup/daily.1}"' {} \;
which handles unusual filenames well.
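A batched sketch of the same idea, spawning one shell per batch of files instead of one shell per file (assuming a POSIX sh):
find /backup/daily.1/var/www/ -iname "*.jpg" -type f -exec sh -c '
for f; do cp "$f" "${f#/backup/daily.1}"; done' sh {} +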

find /backup/daily.1/var/www/ -iname "*.jpg" -type f \
| sed 's|^/backup/daily\.1\(.*\)$|\0 \1|' \
| ( while read origin dest; do cp "$origin" "$dest"; done)
In the sed expression:
\0 is replaced by the matched string, which is the whole line in this case
\1 is replaced by the subpattern match \(.*\), that is everything from after /backup/daily.1 up to the end of the line
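For example, on a single hypothetical path (assuming a sed that treats \0 like &, as described above):
echo /backup/daily.1/var/www/a.jpg |
sed 's|^/backup/daily\.1\(.*\)$|\0 \1|'
# prints: /backup/daily.1/var/www/a.jpg /var/www/a.jpg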

Related

sed to replace string in file only displayed but not executed

I want to find all files with certain name (Myfile.txt) that do not contain certain string (my-wished-string) and then do a sed in order to do a replace in the found files. I tried with:
find . -type f -name "Myfile.txt" -exec grep -H -E -L "my-wished-string" {} + | sed 's/similar-to-my-wished-string/my-wished-string/'
But this only displays the files with the wished name that are missing "my-wished-string"; it does not execute the replacement. Am I missing something?
With a for loop and invoking a shell:
find . -type f -name "Myfile.txt" -exec sh -c '
for f; do
grep -H -E -L "my-wished-string" "$f" &&
sed -i "s/similar-to-my-wished-string/my-wished-string/" "$f"
done' sh {} +
You might want to add a -q to grep and -n to sed to silence the printing/output to stdout
You can do this by constructing two stacks; the first containing the files to search, and the second containing negative hits, which will then be iterated over to perform the replacement.
find . -type f -name "Myfile.txt" > stack1
while read -r line;
do
[ -z "$(sed -n '/my-wished-string/p' "${line}")" ] && echo "${line}" >> stack2
done < stack1
while read -r line;
do
sed -i "s/similar-to-my-wished-string/my-wished-string/" "${line}"
done < stack2
With some versions of sed, you can use -i to edit the file. But don't pipe the list of names to sed, just execute sed in the find:
find . -type f -name Myfile.txt -not -exec grep -q "my-wished-string" {} \; -exec sed -i 's/similar-to-my-wished-string/my-wished-string/g' {} \;
Note that any file which contains similar-to-my-wished-string also contains the string my-wished-string as a substring, so with these exact strings the command is a no-op, but I suppose your actual strings are different than these.

Exec and sed commands

The question is how to combine the following commands in one line and use exec.
find . -name '*.txt' -exec sh -c 'echo "$(sed -n "\$p" "$1"),$1"' _ {} \;
The result: path and name of all .txt files.
find . -name '*.txt' -exec sed -n '/stringA/,/stringB/p' {} \;
The result: lines between start and end parameters over all .txt files.
The requested result: give me lines between start and end parameters. The first line must contain the path and name of the .txt file.
find . -name '*.txt' -exec ???? {} \;
./alpha/file01.txt
stringA
line1
line2
stringB
./beta/file02.txt
stringA
line1
line2
stringB
Thanks.
T.
If the files are non-empty then all you need is:
find . -name '*.txt' -exec awk 'FNR==1{print FILENAME} /stringA/,/stringB/' {} +
If they can be empty then the simplest way to handle it is to use GNU awk for BEGINFILE:
find . -name '*.txt' -exec awk 'BEGINFILE{print FILENAME} /stringA/,/stringB/' {} +
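If you do not need find at all (files in the current directory only), the same awk program can be given the files directly, for example:
awk 'FNR==1{print FILENAME} /stringA/,/stringB/' ./*.txt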
It may be easier to pipe the output of find to other commands than using -exec. Please try the following:
find . -type f -name '*.txt' -print0 | while read -r -d "" f; do echo "$f"; sed -n "/stringA/,/stringB/p" "$f"; done
which yields:
./alpha/file01.txt
stringA
line1
line2
stringB
./beta/file02.txt
stringA
line1
line2
stringB
The -print0 option in find prints the filenames delimited by a null character.
The -d "" option in read splits the input on the null character, so the list of filenames is reproduced properly.
Then we can refer to $f as a filename in the while loop.
Hope this helps.
Using GNU sed and realpath:
sed -sn '1F;/stringA/,/stringB/p' $(realpath *.txt)
Or, if relative paths are acceptable:
sed -sn '1F;/stringA/,/stringB/p' ./*.txt
If the code needs to recurse into subdirs:
sed -sn '1F;/stringA/,/stringB/p' $(find . -name '*.txt')

Bash - file path can not be read for LAME encoder

How do I properly escape the path to come out of find to a new command argument?
#!/bin/bash
for f in $(find . -type f -name '*.flac')
do
if flac -cd "$f" | lame -bh 320 - "${f%.*}".mp3; then
rm -f "$f"
echo "removed $f"
fi
done
returns
lame: excess arg Island of the Gods - 3.mp3
Using a Bash for loop is not ideal for the results of find or ls. There are other ways to do it.
You may want to use -print0 and xargs to avoid word splitting issues.
$ find [path] -type f -name '*.flac' -print0 | xargs -0 [command line {xargs puts in fn}]
Or use -exec primary in find:
$ find [path] -type f -name '*.flac' -exec [process {find puts in fn}] \;
Alternative, you can use a while loop:
find [path] -type f -name '*.flac' | while IFS= read -r fn; do # fn not quoted here...
echo "$fn" # QUOTE fn here!
# body of your loop
done
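A concrete sketch for the flac/lame case from the question, combining -print0 with a null-delimited read so spaces in paths are safe (bash):
find . -type f -name '*.flac' -print0 |
while IFS= read -r -d '' f; do
if flac -cd "$f" | lame -bh 320 - "${f%.*}".mp3; then
rm -f "$f"
echo "removed $f"
fi
done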

Using awk to print ALL spaces within filenames which have a varied number of spaces

I'm executing the following using bash and awk to get the potentially space-full filename, colon, file size. (Column 5 contains the space delimited size, and 9 to EOL the file name):
src="Desktop"
echo "Constructing $src files list. `date`"
cat /dev/null > "$src"Files.txt
find -s ~/"$src" -type f -exec ls -l {} \; |
awk '{for(i=9;i<=NF;i++) {printf("%s", $i " ")} print ":" $5}' |
grep -v ".DS_Store" | grep -v "Icon\r" |
while read line ; do filespacesize=`basename "$line"`; filesize=`echo "$filespacesize" |
sed -e 's/ :/:/1'`
path=`dirname "$line"`; echo "$filesize:$path" >> "$src"Files.txt ;
done
And it works fine, BUT…
If a filename has > 1 space between parts, I only get 1 space between filename parts, and the colon, followed by the filesize.
How can I get the full filename, :, and then the file size?
It seems you want the following (provided your find handles the printf option with the %f, %s and %h modifiers):
src=Desktop
echo "Constructing $src files list. $(date)"
find ~/"$src" -type f -printf '%f:%s:%h\n' > "$src"Files.txt
Much shorter and much more efficient than your method!
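Each output line then has the form name:size-in-bytes:directory, for example (hypothetical file):
my photo.jpg:48213:/Users/you/Desktop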
This will not discard the .DS_STORE and Icon\r things… but I'm not really sure what you really want to discard. If you want to discard the .DS_STORE directory altogether:
find ~/"$src" -name '.DS_STORE' -type d -prune -o -type f -printf '%f:%s:%h\n' > "$src"Files.txt
@guido seems to have guessed what you mean by grep -v "Icon\r": ignore files ending with Icon; if his guess is right, then this will do:
find ~/"$src" -name '.DS_STORE' -type d -prune -o ! -name '*Icon' -type f -printf '%f:%s:%h\n' > "$src"Files.txt

find -exec with multiple commands

I am trying to use find -exec with multiple commands without any success. Does anybody know if commands such as the following are possible?
find *.txt -exec echo "$(tail -1 '{}'),$(ls '{}')" \;
Basically, I am trying to print the last line of each txt file in the current directory and print at the end of the line, a comma followed by the filename.
find accepts multiple -exec portions to the command. For example:
find . -name "*.txt" -exec echo {} \; -exec grep banana {} \;
Note that in this case the second command will only run if the first one returns successfully, as mentioned by @Caleb. If you want both commands to run regardless of their success or failure, you could use this construct:
find . -name "*.txt" \( -exec echo {} \; -o -exec true \; \) -exec grep banana {} \;
find . -type d -exec sh -c "echo -n {}; echo -n ' x '; echo {}" \;
One of the following:
find *.txt -exec awk 'END {print $0 "," FILENAME}' {} \;
find *.txt -exec sh -c 'echo "$(tail -n 1 "$1"),$1"' _ {} \;
find *.txt -exec sh -c 'echo "$(sed -n "\$p" "$1"),$1"' _ {} \;
Another way is like this:
multiple_cmd() {
tail -n1 $1;
ls $1
};
export -f multiple_cmd;
find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;
In one line:
multiple_cmd() { tail -1 $1; ls $1; }; export -f multiple_cmd; find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;
"multiple_cmd()" - is a function
"export -f multiple_cmd" - will export it so any other subshell can see it
"find *.txt -exec bash -c 'multiple_cmd "$0"' {} \;" - find that will execute the function on your example
In this way multiple_cmd can be as long and as complex as you need.
Hope this helps.
There's an easier way:
find ... | while read -r file; do
echo "look at my $file, my $file is amazing";
done
Alternatively:
while read -r file; do
echo "look at my $file, my $file is amazing";
done <<< "$(find ...)"
Extending @Tinker's answer,
In my case, I needed to make a command | command | command inside the -exec to print both the filename and the found text in files containing a certain text.
I was able to do it with:
find . -name config -type f \( -exec grep "bitbucket" {} \; -a -exec echo {} \; \)
the result is:
url = git@bitbucket.org:a/a.git
./a/.git/config
url = git@bitbucket.org:b/b.git
./b/.git/config
url = git@bitbucket.org:c/c.git
./c/.git/config
I don't know if you can do this with find, but an alternate solution would be to create a shell script and to run this with find.
lastline.sh:
#!/bin/sh
echo "$(tail -1 "$1"),$1"
Make the script executable
chmod +x lastline.sh
Use find:
find . -name "*.txt" -exec ./lastline.sh {} \;
Thanks to Camilo Martin, I was able to answer a related question:
What I wanted to do was
find ... -exec zcat {} | wc -l \;
which didn't work. However,
find ... | while read -r file; do echo "$file: `zcat $file | wc -l`"; done
does work, so thank you!
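For reference, one way to keep the pipe inside a single -exec is to wrap it in sh -c and pass the file name as a parameter (a sketch in the same style as the earlier answers; the ... stands for your other find options):
find ... -exec sh -c 'echo "$1: $(zcat "$1" | wc -l)"' _ {} \;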
Denis's first answer resolves the problem, but in fact it is no longer a find with several commands in only one exec, as the title suggests. To answer the one-exec-with-several-commands question, we have to look for something else. Here is an example:
Keep the last 10000 lines of .log files which have been modified in the last 7 days, using one exec command with several {} references.
1) See what the command will do on which files:
find / -name "*.log" -a -type f -a -mtime -7 -exec sh -c "echo tail -10000 {} \> fictmp; echo cat fictmp \> {} " \;
2) Do it (note: no more "\>", only ">"; this is intended):
find / -name "*.log" -a -type f -a -mtime -7 -exec sh -c "tail -10000 {} > fictmp; cat fictmp > {} ; rm fictmp" \;
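A variant sketch of the same idea that passes the file name as a parameter instead of embedding {} inside the quoted script, and keeps a temporary file next to each log (the .tmp suffix is an arbitrary choice):
find / -name "*.log" -a -type f -a -mtime -7 -exec sh -c '
tail -10000 "$1" > "$1.tmp"; cat "$1.tmp" > "$1"; rm "$1.tmp"' _ {} \;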
I usually embed the find in a small for loop one liner, where the find is executed in a subcommand with $().
Your command would look like this then:
for f in $(find *.txt); do echo "$(tail -1 $f), $(ls $f)"; done
The good thing is that instead of {} you just use $f and instead of the -exec … you write all your commands between do and ; done.
Not sure what you actually want to do, but maybe something like this?
for f in $(find *.txt); do echo $f; tail -1 $f; ls -l $f; echo; done
should use xargs :)
find *.txt -type f -exec tail -1 {} \; | xargs -ICONSTANT echo $(pwd),CONSTANT
another one (working on osx)
find *.txt -type f -exec echo ,$(PWD) {} + -exec tail -1 {} + | tr ' ' '/'
A find+xargs answer.
The example below finds all .html files and creates a copy with the .BAK extension appended (e.g. 1.html > 1.html.BAK).
Single command with multiple placeholders
find . -iname "*.html" -print0 | xargs -0 -I {} cp -- "{}" "{}.BAK"
Multiple commands with multiple placeholders
find . -iname "*.html" -print0 | xargs -0 -I {} echo "cp -- {} {}.BAK ; echo {} >> /tmp/log.txt" | sh
# if you need to do anything bash-specific then pipe to bash instead of sh
This command will also work with files that start with a hyphen or contain spaces, such as -my file.html, thanks to parameter quoting and the -- after cp, which signals to cp the end of options and the beginning of the actual file names.
-print0 pipes the results with null-byte terminators.
for xargs the -I {} parameter defines {} as the placeholder; you can use whichever placeholder you like; -0 indicates that input items are null-separated.
I found this solution (maybe it is already said in a comment, but I could not find any answer with this):
You can execute MULTIPLE COMMANDS in a row using "bash -c":
find . <SOMETHING> -exec bash -c "EXECUTE 1 && EXECUTE 2 ; EXECUTE 3" \;
In your case:
find . -name "*.txt" -exec bash -c "tail -1 '{}' && ls '{}'" \;
I tested it with a test file:
[gek@tuffoserver tmp]$ ls *.txt
casualfile.txt
[gek@tuffoserver tmp]$ find . -name "*.txt" -exec bash -c "tail -1 '{}' && ls '{}'" \;
testonline1=some TEXT
./casualfile.txt
Here is my bash script that you can use to find multiple files and then process them all using a command.
Example of usage. This command applies the Linux file command to each found file:
./finder.sh file fb2 txt
Finder script:
#!/bin/bash
# Find files and process them using an external command.
# Usage:
# ./finder.sh ./processing_script.sh txt fb2 fb2.zip doc docx
counter=0
find_results=()
for ext in "${@:2}"
do
    # see https://stackoverflow.com/a/54561526/10452175
    readarray -d '' ext_results < <(find . -type f -name "*.${ext}" -print0)
    for file in "${ext_results[@]}"
    do
        counter=$((counter+1))
        find_results+=("${file}")
        echo "${counter}) ${file}"
    done
done
countOfResults=$((counter))
echo -e "Found ${countOfResults} files.\n"
echo "Processing..."
counter=0
for file in "${find_results[@]}"
do
    counter=$((counter+1))
    echo -n "${counter}/${countOfResults}) "
    eval "$1 '${file}'"
done
echo "All files have been processed."
