Using find and sed in Windows 7 to recursively modify files

I'm trying to use find in Windows 7 with GNU sed to recursively replace a line of text in multiple files, across multiple directories. I looked at this question but the PowerShell solution seems to work with only one file, and I want to work with all files with a certain extension, recursively from the current directory. I tried this command:
find "*.mako" -exec sed -i "s:<%inherit file="layout.mako"/>:<%inherit file="../layout.mako"/>:"
But that gives me a bunch of crap and doesn't change any files:
---------- EDIT.MAKO
File not found - -EXEC
File not found - SED
File not found - -I
File not found - LAYOUT.MAKO/>:<%INHERIT FILE=../LAYOUT.MAKO/>:
How can I do this? It seems like I should have all the tools installed that I need, without having to install Cygwin or UnixUtils or anything else.
Edit: okay, working with GNU find, I still can't get anywhere, because I can't get the find part to work:
> gfind -iname "*.mako" .
C:\Program Files (x86)\GnuWin32\bin\gfind.exe: paths must precede expression
> gfind . -iname "*.mako"
C:\Program Files (x86)\GnuWin32\bin\gfind.exe: paths must precede expression
I was originally not using GNU find in Windows 7 because of this question.
Edit:
I tried the following, but sed doesn't see any input files this way:
> ls -r | grep mako | sed -i 's/file="layout.mako"/file="..\/layout.mako"/'
sed.exe: no input files

Windows' FIND is being found instead of GNU find.
So rename your find.exe (the GNU one) to gfind.exe (for example), and then call gfind instead of find when you wish to run it.
[edit]
gfind . -name "*.mako" (not gfind -iname "*.mako" .)
[/edit]
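For example, from a Command Prompt (a sketch; the GnuWin32 path is taken from the error messages above, the copy leaves the original find.exe in place, and writing to Program Files may require an elevated prompt):
copy "C:\Program Files (x86)\GnuWin32\bin\find.exe" "C:\Program Files (x86)\GnuWin32\bin\gfind.exe"
gfind . -name "*.mako"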

You're executing the regular Windows 'find' command, which has completely different command-line arguments than GNU find. MS find has no capability to execute a program for each match; it simply searches.

In addition to Marc B's and KevinDTimm's answers: your find syntax is wrong.
It is not:
find "*.mako"
but:
find -name "*.mako"
Also, if there are directories that match "*.mako", they would be sent to sed. To avoid that:
find -name "*.mako" -type f
Finally, I think that you are missing a '\;' at the end of your find command.
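Putting those pieces together, the whole command might look like this (a sketch, assuming GNU find is invoked as gfind per the answers above; colons delimit the sed expression so the slashes in the replacement need no escaping, the inner double quotes are backslash-escaped for the C runtime's argument parsing, and the trailing ; needs no backslash because cmd.exe, unlike a Unix shell, would pass \; through literally):
gfind . -type f -name "*.mako" -exec sed -i "s:<%inherit file=\"layout.mako\"/>:<%inherit file=\"../layout.mako\"/>:" {} ;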

In PowerShell, note the escape sequence using backticks:
find . -type f -exec grep hello `{`} `;
It's much easier to use xargs:
find . | xargs grep hello
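Applied to the question's actual substitution, that approach might look like the following (a sketch; it assumes no matched path contains spaces, since xargs splits on whitespace by default):
gfind . -type f -name "*.mako" | xargs sed -i "s:file=\"layout.mako\":file=\"../layout.mako\":"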

I tried the following, but sed doesn't see any input files this way:
ls -r | grep mako | sed -i 's/file="layout.mako"/file="..\/layout.mako"/'
sed.exe: no input files
With this you are now running into PowerShell's "ls" alias. Either call "ls.exe" or go all PowerShell like this:
ls -r | select-string mako -list | select -exp path | sed -i 's/file="layout.mako"/file="..\/layout.mako"/'
Edit:
Workaround if stdin handling doesn't seem to be working.
ls -r | select-string mako -list | select -exp path | % {sed -i 's/file="layout.mako"/file="..\/layout.mako"/' $_}
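The % is PowerShell's ForEach-Object alias. To preview what would change before editing in place, the same pipeline can run sed without -i, printing only the substituted lines (a sketch using sed's -n flag and the p command):
ls -r | select-string mako -list | select -exp path | % {sed -n 's/file="layout.mako"/file="..\/layout.mako"/p' $_}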

Per your
Edit:
I tried the following, but sed doesn't see any input files this way:
ls -r | grep mako | sed -i 's/file="layout.mako"/file="..\/layout.mako"/'
sed.exe: no input files
you need to use xargs to assemble the list of files passed to sed, i.e.
ls -r | grep mako | xargs sed -i 's\:file="layout.mako":file="../layout.mako":'
Note that with most versions of sed, you can use an alternate character to delimit the substitute's match/replace strings (the default is '/'). Some seds require escaping that alternate character, which I have done in this example.
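For comparison, here is the same substitution written with the default '/' delimiter, where every slash inside the replacement must be escaped (a minimal illustration):
ls -r | grep mako | xargs sed -i 's/file="layout.mako"/file="..\/layout.mako"/'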
I hope this helps.

Related

SED cannot open subdirectory

I have written some code that checks all the files in the folder and matches a regex statement against them. However, there is a subfolder in the directory that I want it to enter and also run the regex on, but whenever I run the code it gives me this error:
sed: couldn't edit TestFolder: not a regular file
I've looked all over S.O. and the Internet and can't find anything helpful.
I've tried to use code I've found to fix my problem, but it isn't helping, so I apologise for the potentially hideous code; it's pulled from various sources:
pwd = "$PWD"
find $pwd = -print | xargs -0 sed -i "/10.0.0.10/d" !(test.sh)
My directory structure is as follows:
Test
-one.txt
-two.txt
TestFolder
-three.txt
With GNU find, GNU xargs, and GNU sed:
find . -type f -not -name 'test.sh' -print0 | xargs -0 sed -i '/\<10\.0\.0\.10\>/d'
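To verify the pattern before deleting anything, the same file selection can feed grep instead of sed -i (a sketch; \< and \> are GNU word boundaries, so addresses like 10.0.0.100 or 110.0.0.10 would not match):
find . -type f -not -name 'test.sh' -print0 | xargs -0 grep -n '\<10\.0\.0\.10\>'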

Execute command on files returned by grep

Say I want to edit every .html file in a directory, one after the other, using vim. I can do this with:
find . -name "*.html" -exec vim {} \;
But what if I only want to edit every html file containing a certain string, one after the other? I use grep to find the files containing those strings, but how can I pass each one to vim, similar to the find command above? Perhaps I should use something other than grep, or somehow pipe the find command to grep and then exec vim. Does anyone know how to edit files containing a certain string one after the other, in the same fashion as the find example above?
grep -l 'certain string' *.html | xargs vim
This assumes you don't have eccentric file names with spaces etc in them. If you have to deal with eccentric file names, check whether your grep has a -z option to terminate output lines with null bytes (and xargs has a -0 option to read such inputs), and if so, then:
grep -zl 'certain string' *.html | xargs -0 vim
If you need to search subdirectories, maybe your version of Bash has support for **:
grep -zl 'certain string' **/*.html | xargs -0 vim
Note: these commands run vim on batches of files. If you must run it once per file, then you need to use -n 1 as extra options to xargs before you mention vim. If you have GNU xargs, you can use -r to prevent it running vim when there are no file names in its input (none of the files scanned by grep contain the 'certain string').
The variations can be continued as you invent new ways to confuse things.
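One caveat: when vim gets its file list from a pipe via xargs, its standard input is not the terminal, which can leave the terminal in a confused state afterwards. A command substitution sidesteps that, with the same eccentric-file-name caveat as above (a sketch):
vim $(grep -l 'certain string' *.html)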
With find:
find . -type f -name '*.html' -exec bash -c 'grep -q "yourtext" "${1}" && vim "${1}"' _ {} \;
For each file, this calls bash, which greps the file for yourtext and opens it in vim if the text matches.
Solution with a for loop:
for i in $(find . -type f -name '*.html'); do vim "$i"; done
This opens each file in vim, one after another, as you close the previous one. Note that the $(find ...) substitution splits on whitespace, so it misbehaves on file names containing spaces; a safer variant follows below.
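If file names may contain spaces, reading NUL-delimited names from find is safer (a sketch, assuming bash; the < /dev/tty keeps vim's input attached to the terminal while the loop reads from the pipe):
find . -type f -name '*.html' -print0 | while IFS= read -r -d '' i; do vim "$i" < /dev/tty; done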

Bash find filter and copy - trouble with spaces

So after a lot of searching and trying to interpret others' questions and answers to my needs, I decided to ask for myself.
I'm trying to take a directory structure full of images and place all the images (regardless of extension) in a single folder. In addition to this, I want to be able to remove images matching certain filenames in the process. I have a find command working that outputs all the filepaths for me
find -type f -exec file -i -- {} + | grep -i image | sed 's/\:.*//'
but if I try to use that to copy files, I have trouble with the spaces in the filenames.
cp `find -type f -exec file -i -- {} + | grep -i image | sed 's/\:.*//'` out/
What am I doing wrong, and is there a better way to do this?
With the caveat that it won't work if files have newlines in their names:
find . -type f -exec file -i -- {} + |
awk -vFS=: -vOFS=: '$NF ~ /image/{NF--;printf "%s\0", $0}' |
xargs -0 cp -t out/
(Based on the answer by Jonathan Leffler and the subsequent comment discussion with him and @devnull.)
The find command works well if none of the file names contain any newlines. Within broad limits, the grep command works OK under the same circumstances. The sed command works fine as long as there are no colons in the file names. However, given that there are spaces in the names, the use of $(...) (command substitution, also indicated by back-ticks `...`) is a disaster. Unfortunately, xargs isn't readily a part of the solution; it splits on spaces by default. Because you have to run file and grep in the middle, you can't easily use the -print0 option to (GNU) find and the -0 option to (GNU) xargs.
In some respects, it is crude, but in many ways, it is easiest if you write an executable shell script that can be invoked by find:
#!/bin/bash
# Copy each argument that file(1) identifies as an image into out/
for file in "$@"
do
    if file -i -- "$file" | grep -i -q "$file:.*image"
    then cp "$file" out/
    fi
done
This is a little painful in that it invokes file and grep separately for each name, but it is reliable. The file command is even safe if the file name contains a newline; the grep is probably not.
If that script is called 'copyimage.sh', then the find command becomes:
find . -type f -exec ./copyimage.sh {} +
And, given the way the grep command is written, the copyimage.sh file won't be copied, even though its name contains the magic word 'image'.
Pipe the results of your find command to
xargs -l --replace cp "{}" out/
Example of how this works for me on Ubuntu 10.04:
atomic@atomic-desktop:~/temp$ ls
img.png  img space.png
atomic@atomic-desktop:~/temp$ mkdir out
atomic@atomic-desktop:~/temp$ find -type f -exec file -i \{\} \; | grep -i image | sed 's/\:.*//' | xargs -l --replace cp -v "{}" out/
`./img.png' -> `out/img.png'
`./img space.png' -> `out/img space.png'
atomic@atomic-desktop:~/temp$ ls out
img.png  img space.png
atomic@atomic-desktop:~/temp$

Removing files with a double quote in their name

I am trying to remove files within a directory. Some of the files have double-quotes around their name while others do not. An example of these files would be:
"DDD344".csv
D2DW.csv
Both these files are located in sub-directories within the directory YM.
To find such files and remove them, I invoke find like so:
find YM -name "*.csv" -print | xargs rm
The above command results in a lot of No such file or directory errors.
I tried using sed in the following way:
find yum/yum_hyd -name "\"*\".csv" | sed 's/"/\"/g' | xargs rm
but to no avail. How do I remove the files?
The problem is that you're using xargs. xargs is a horribly broken program that should never be used for anything except in conjunction with the nonstandard -0 option, and even then I can't think of any advantage to doing so in this case. You should just execute rm directly from find.
find . -type f -name '"*".csv' -exec rm -f -- {} +
will work. If you have GNU find, you may also use -delete:
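find . -type f -name '"*".csv' -delete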
Try this:
find yum/yum_hyd -name "\"*\".csv" | sed 's/"/\\"/g' | xargs rm
Explanation: you want to replace " with \". But if you write \" directly, sed considers it a plain ", so you have to escape the backslash; \\" works.
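You can see what that sed expression actually emits by running it on a sample name (a minimal illustration, using a file name from the question):
$ echo '"DDD344".csv' | sed 's/"/\\"/g'
\"DDD344\".csv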
I wasn't aware of this option until recently, but you can list the inode of each file in the following way:
$ ls -il
In the output you will see that the first column contains the inode number. You can then use that value with find -inum to locate the offending files and remove them.
Output
2616366 -rw-r--r-- 1 etc etc
$ find . -inum 2616366 -exec rm -f {} \;
This will remove the file with that specific inum.
As a test, you can run the following to locate your files:
ls -il \"* | awk '{print $1}' | xargs -n1 -I {} find -inum {}
Replace the final portion of this command (the "find -inum {}") with the "rm" command once you are satisfied.
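One way to make that replacement concrete: give xargs a placeholder other than {} so it doesn't collide with find's own {} (a sketch; double-check the listing from the test command first, since this removes files):
ls -il \"* | awk '{print $1}' | xargs -n1 -I % find . -inum % -exec rm -f {} \;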
This is also similar to the question on SuperUser

How to echo directories containing matching file with Bash?

I want to write a bash script which will use a list of all the directories containing specific files. I can use find to echo the path of each and every matching file; I only want to list the path to each directory containing at least one matching file.
For example, given the following directory structure:
dir1/
matches1
matches2
dir2/
no-match
The command (looking for 'matches*') will only output the path to dir1.
As extra background, I'm using this to find each directory which contains a Java .class file.
find . -name '*.class' -printf '%h\n' | sort -u
From man find:
-printf format
%h Leading directories of file’s name (all but the last element). If the file name contains no slashes (since it is in the current directory) the %h specifier expands to ".".
On OS X and FreeBSD, with a find that lacks the -printf option, this will work:
find . -name '*.class' -print0 | xargs -0 -n1 dirname | sort --unique
The -n1 option tells xargs to pass at most one argument from standard input to each invocation of dirname.
GNU find
find /root_path -type f -iname "*.class" -printf "%h\n" | sort -u
OK, I'm coming to this way too late, but to answer specifically to "matching file with Bash" (or at least a POSIX shell), you can also do it without find:
ls */*.class | while read; do
    echo "${REPLY%/*}"
done | sort -u
The ${VARNAME%/*} strips everything after the last / (to strip everything after the first, it would be ${VARNAME%%/*}).
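A quick interactive check of the two expansions (a minimal illustration on a hypothetical two-level path, where the difference is visible):
$ p=dir1/sub/matches1
$ echo "${p%/*}"
dir1/sub
$ echo "${p%%/*}"
dir1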
Regards.
find / -name '*.class' -printf '%h\n' | sort --unique
Far too late, but this might be helpful to future readers:
I personally find it more helpful to have the list of folders printed into a file, rather than to Terminal (on a Mac).
For that, you can simply output the paths to a file, e.g. folders.txt, by using:
find . -name '*.sql' -print0 | xargs -0 -n1 dirname | sort --unique > folders.txt
How about this?
find dirs/ -name '*.class' -exec dirname '{}' \; | awk '!seen[$0]++'
For the awk command, see #43 on this list
