Replacing a part of file path in exec - bash

I would like to replace a part of each file path that is found by the Linux find command.
My approach is attached below:
find . -type f -name "*.txt" -exec echo {} | sed "s/f/u/g" {} \;
I expect each letter "f" in the file path to be replaced with "u". Unfortunately, I get these errors:
find: missing argument to `-exec'
sed: can't read {}: No such file or directory
sed: can't read ;: No such file or directory
What did I do wrong? Thank you for your help.

I would like to replace the part of each file path
If you want to change just the file names/paths then use:
find . -type f -name "*.txt" -exec bash -c 'echo "$1" | sed "s/f/u/g"' - {} \;
or, a bit more efficiently, with xargs (it avoids spawning a subshell for each found file):
find . -type f -name "*.txt" -print0 |
xargs -0 bash -c 'for f; do sed "s/f/u/g" <<< "$f"; done'
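Since the substitution here is a single-character replacement, bash parameter expansion can do it without sed at all (a sketch; like the commands above, it only prints the transformed paths):

```shell
find . -type f -name "*.txt" -print0 |
while IFS= read -r -d '' f; do
printf '%s\n' "${f//f/u}"   # ${f//f/u} replaces every "f" in the path with "u"
done
```

The -print0/read -d '' pairing keeps this safe for file names containing spaces or newlines.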

find . -type f -name "*.txt" | while IFS= read -r files
do
newname=$(echo "${files}" | sed 's#f#u#g')
mv -v "${files}" "${newname}"
done
I don't completely understand what you mean by "file path". If you weren't talking about the file name, please clarify further.

Show directory path with only files present in them

This is the folder structure that I have.
Using the find command find . -type d in the root folder gives me the following result:
Result
./folder1
./folder1/folder2
./folder1/folder2/folder3
However, I want the result to be only ./folder1/folder2/folder3, i.e. only print a directory if there is a file of type .txt present inside it.
Can someone help with this scenario? Hope it makes sense.
find . -type f -name '*.txt' |
sed 's=/[^/]*\.txt$==' |
sort -u
Find all .txt files, strip the file name with sed to keep only the parent directory, then use sort -u to remove duplicates.
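For example, feeding one such path through the sed expression strips the trailing file-name component (notes.txt is an invented example file):

```shell
printf '%s\n' './folder1/folder2/folder3/notes.txt' | sed 's=/[^/]*\.txt$=='
# prints ./folder1/folder2/folder3
```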
This won't work on file names/paths that contain a newline.
You may use this find command, which finds all the *.txt files and then prints their unique parent directory names:
find . -type f -name '*.txt' -exec bash -c '
for f; do
f="${f#.}"
printf "%s\0" "$PWD${f%/*}"
done
' _ {} + | awk -v RS='\0' '!seen[$0]++'
We use printf "%s\0" to handle directory names containing newlines, spaces, and glob characters, and gnu-awk with RS='\0' to print only the unique directory names.
Using an associative array and process substitution:
#!/usr/bin/env bash
declare -A uniq_path
while IFS= read -rd '' files; do
path_name=${files%/*}
if ((!uniq_path["$path_name"]++)); then
printf '%s\n' "$path_name"
fi
done < <(find . -type f -name '*.txt' -print0)
Check the value of uniq_path
declare -p uniq_path
Maybe this POSIX one?
find root -type f -name '*.txt' -exec dirname {} \; | awk '!seen[$0]++'
* adds a trailing \n after each directory path
* breaks when a directory in a path has a \n in its name
Or this BSD/GNU one?
find root -type f -name '*.txt' -exec dirname {} \; -exec printf '\0' \; | sort -z -u
* adds a trailing \n\0 after each directory path
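With GNU find, the dirname/printf pair above can be collapsed: the -printf '%h\0' directive prints each file's parent directory, NUL-terminated (an assumption: GNU findutils and GNU sort are installed):

```shell
# %h is the found file's parent directory; \0 keeps the output newline-safe
find root -type f -name '*.txt' -printf '%h\0' | sort -z -u
```

* spawns no per-file process and survives newlines in directory names
* prints NUL-terminated paths; pipe through tr '\0' '\n' for display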

linux find more than one -exec

find . -iname "*.txt" -exec program '{}' \; | sed 's/Value= //'
"program" returns a different value for each file, and its output is prefixed with "Value= ".
In this case the output will be "Value= 128", and after sed just 128.
How can I take just the value "128" and have the input file renamed to 128.txt,
while also having this find run through multiple files?
Sorry for the bad description; I will try to clarify if needed.
First write a shell script capable of renaming an argument:
mv "$1" "$(program "$1" | sed "s/Value= //").txt"
Then embed that script in your find command:
find . -iname "*.txt" \
-exec sh -c 'mv "$1" "$(program "$1" | sed "s/Value= //").txt"' _ {} \;
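When many files match, the same embedded script can be batched with {} + so one shell invocation handles a whole argument list ("program" here is still the placeholder command from the question):

```shell
# find passes the file names as a batch; the for loop walks "$@"
find . -iname "*.txt" -exec sh -c '
for f; do
mv "$f" "$(program "$f" | sed "s/Value= //").txt"
done
' _ {} +
```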

Shell Script Mac: Loop through Directory and Remove String

I am trying to loop through my project folder recursively, grep every PHP file, and find any string that matches xdebug_break();.
Then I want to remove that xdebug_break() (I will accept replacing with a space as well).
Here's what I got so far:
#!/bin/bash
FILES=$(find ../../Dev/projects/api -type f -name *.php)
for f in $FILES
do
if grep -nr "xdebug_break();" $f
then
sed -e '/xdebug_break();/d' -i $f
echo "xdebug_break(); has been deleted."
fi
done
Everything works, except the replace part. I keep getting this error:
sed: -i may not be used with stdin
I do not care if it's sed or awk or whatever (but I do use a Mac).
Thanks,
SOLUTION (FOR FUTURE READERS)
Thanks for the help everyone (especially @anubhava). This one-line trick did it for me:
find ../../Dev/projects/api -type f -name "*.php" -exec sed -i '' '/xdebug_break();/d' {} +
Also you can do it by loop (if you really really want to) like this:
#!/bin/bash
FILES=$(find ../../Dev/projects/api -type f -name "*.php")
for f in $FILES
do
if grep -q "xdebug_break();" "$f"
then
sed -i '' '/xdebug_break();/d' "$f"
echo "xdebug_break(); has been deleted."
fi
done
On OSX, your sed command should be:
sed -i.bak '/xdebug_break();/d' "$f"
Here .bak is the extension used to create a backup of the input file during in-place editing.
You can avoid loop and do it in one find like this:
find ../../Dev/projects/api -type f -name "*.php" \
-exec sed -i.bak '/xdebug_break();/d' {} +
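The -i.bak variants leave a *.php.bak backup next to every edited file. Once you have verified the changes, a second find can clean them up (this deletes files, so it is only a sketch; dry-run with -print in place of -delete first):

```shell
# remove the sed backups left behind by -i.bak
find ../../Dev/projects/api -type f -name '*.php.bak' -delete
```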
You can combine find and sed:
find Dev/projects/api -type f -name '*.php' -exec sed -i.bak '/xdebug_break();/d' {} \;

Run script against all txt files in directory and sub directories - BASH

What I'm trying to do is something along these lines (this is pseudocode):
for txt in $(some fancy command ./*.txt); do
some command here $txt
You can use find:
find /path -type f -name "*.txt" | while IFS= read -r txt; do
echo "$txt"; # Do something else
done
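Even with a careful read, a newline in a file name will still break the pipe-to-while form; if your find supports -print0, a null-delimited read handles every legal file name:

```shell
# -print0 separates paths with \0; read -d '' consumes them
find /path -type f -name "*.txt" -print0 | while IFS= read -r -d '' txt; do
echo "$txt"; # Do something else
done
```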
Use the -exec option to find:
find /usr/share/wordlists/*/* -type f -name '*.txt' -exec yourScript {} \;
Try
find . | grep '\.txt$' | xargs -I {} script.sh {}
find returns all files under the directory, grep selects only the .txt files, and xargs passes each file as a parameter to script.sh.

Bash script to change file names recursively

I have a script for changing the file names of mht files, but it does not traverse directories and subdirectories. I asked a question on a local forum and got an answer that this is a solution:
find . -type f -name "*.mhtml" -o -type f -name "*.mht" | xargs -I item sh -c '{ echo item; echo item | sed "s/[:?|]//g"; }' | xargs -n2 mv
But it generates an error. With some experimenting it turns out that sh -c breaks file names containing spaces, and that this generates the error. How can I fix this?
#!/bin/bash
# renames.sh
# basic file renamer
for i in . *.mht
do
j=`echo $i | sed 's/|/ /g' | sed 's/:/ /g' | sed 's/?//g' | sed 's/"//g'`
mv "$i" "$j"
done
#! /bin/bash
find . -type f \( -name "*.mhtml" -o -name "*.mht" \) -print0 |
while IFS= read -r -d '' source; do
target="${source//[:?|]/}"
[ "X$source" != "X$target" ] &&
mv -nv "$source" "$target"
done
Update: the rename now follows the original question, and support for .mht was added.
Use rename. With rename you can specify a renaming pattern:
find . -type f \( -name "*.mhtml" -o -name "*.mht" \) -print0 | xargs -0 -I'{}' rename 's/[:?|]//g' "{}"
This way you can properly handle names with spaces. xargs replaces {} with each file name provided by the find command. Also note the use of -print0 and -0: they use \0 as the separator, which avoids problems with file names containing \n (newline).
The -o was not working the way it was intended to: you must use parentheses to group conditions.
You may also consider using -iname instead of -name if you deal with files ending in ".mHtml".
