Bash - output to loop using find - bash

I have a list of files in a file (files.txt):
file1.txt
file2.txt
file3.txt
When I execute:
cat files.txt | while read line; do find -name $line >> outfile.txt; done
Why don't I get an outfile with a list of those files' paths?
Whereas if I execute:
find -name file1.txt >> outfile.txt
the task is performed
Many thanks
Clive

Apparently you edited files.txt on a Windows box, then copied it to your Linux or Unix server without converting line endings. This is not an uncommon problem, and multiple solutions exist. :)
Windows uses CR+LF (\r\n) for newlines, whereas unix and linux use only LF (\n).
The first and possibly easiest option may be to re-copy the files with the appropriate conversion in place, if your copy program supports such a thing. If you copied the files over the FTP protocol, check your client for a "type" option, which may be set to "ascii" or "bin"; the details depend on your client. If you're using something like scp, which always transfers files verbatim (as binary), then read on.
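For example, with the classic command-line ftp client the transfer mode is switched with a single command at the prompt (a sketch; the exact option or menu item depends on your client):
ftp> ascii     # text mode: line endings are converted during the transfer
ftp> get files.txt
ftp> binary    # switch back before transferring non-text files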
Another tried-and-true option is to use the dos2unix application which may already be installed on your unix or linux server. If it's not, you may be able to install it using your machine's package manager. (I don't know how, because you haven't mentioned what operating system you're using.) If installed, documentation for using dos2unix can be found by running man dos2unix. For example, if you want to convert all the text files matching *.txt in the current directory and all subdirectories under the current one, you might use the following:
find . -type f -name \*.txt -exec dos2unix -k -o {} \;
The options used here work as follows:
find . - tells find to search recursively, starting from the current directory.
-type f -name \*.txt restricts our search by file type and glob.
-exec runs the rest of the line up to \; on each file, with {} replaced with the filename.
dos2unix - well, you know.
-k - "keep" the timestamp on your original file.
-o - edit the "original" file rather than writing a new file.
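If you only need to fix the single files.txt from your question, you can of course run dos2unix on it directly (assuming it's installed):
dos2unix -k -o files.txt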
If dos2unix isn't available, a number of other built-in tools may be able to do a similar job.
If you're running Linux, you can use GNU sed like this on just one file:
sed -i 's/\r$//' files.txt
Or to process all text files in the current directory:
for file in *.txt; do sed -i 's/\r$//' "$file"; done
Or to run this recursively, if you're using bash version 4 or higher:
shopt -s globstar
for file in **/*.txt; do sed -i 's/\r$//' "$file"; done
Another option for converting your line endings might be perl:
perl -pi -e 's/\r\n/\n/g' files.txt
You could easily make it handle multiple files either in one directory or recursively in a similar way to the options above, if you want.
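For instance, a sketch of the recursive variant, reusing the same find pattern as above:
find . -type f -name \*.txt -exec perl -pi -e 's/\r\n/\n/g' {} \;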
One more option might be to leave the files as-is, and do your conversion as you process the files.txt in bash. For example:
while read line; do
find . -name "${line%$'\r'}"
done < files.txt > outfile.txt
This uses the shell's parameter expansion combined with bash's ANSI-C quoting ($'\r') to strip the offending CR character from the end of each line as it is read by your while loop.
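You can see that expansion at work in isolation (a quick demonstration, not part of the fix itself):
line=$'file1.txt\r'     # simulate what read hands you from a CRLF-terminated line
echo "${line%$'\r'}"    # prints file1.txt, with the trailing CR stripped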
Also of note:
https://en.wikipedia.org/wiki/Newline#Conversion_utilities

Related

How to append a String to all lines of all files in a directory in Bash?

I want to append a string to all lines of the files inside a directory hierarchy. I know how to do this for a single file with the sed command:
sed -e 's/$/ sample string/' -i test.txt
but how do I write a script that does this for all files found in a directory hierarchy (and its sub-directories and sub-sub-directories, and so on)?
for all files found in a directory hierarchy (and its sub-directories and sub-sub-directories, and so on)
This is a job for find, plain and simple.
For example, the command:
find . -type f -exec sed -i 's/$/ sample string/' {} \;
will execute that sed command on all regular files in and under the current directory.
The find command has lots more options (depth limiting, not crossing filesystem boundaries, only working on certain file masks, and so on) that you can use but the ones above are probably all you need for your immediate question.
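For instance, a sketch combining a few of those options (these particular flags exist in both GNU and BSD find):
find . -maxdepth 2 -xdev -type f -name '*.txt' -exec sed -i 's/$/ sample string/' {} \;
# -maxdepth 2 limits recursion depth, -xdev stays on the current filesystem,
# and -name '*.txt' restricts the edit to files matching that glob.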
Keep in mind that sed -i does in-place editing, so if you make a mistake, you'd better make sure you've backed up the original files beforehand.
In the case where you want to create new files (as requested in a comment), you can bypass the -i and create new files. The safest way to do this is probably by putting the commands you want to execute in an executable script (e.g., myprog.sh):
#!/usr/bin/env bash
origName="$1"
newName="$1.new"
sed 's/$/ sample string/' "${origName}" > "${newName}"
Then call that from your find (after making the script executable with chmod +x, and giving its path so find can run it):
find . -type f -exec ./myprog.sh {} \;
Then you can make the target file name an arbitrarily complex value calculated from the original.
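For example, a hypothetical variation of myprog.sh that collects the modified copies under a converted/ directory instead of writing a .new file next to each original (names from different subdirectories will collide, so this suits a flat tree):
#!/usr/bin/env bash
origName="$1"
newName="converted/$(basename "$origName").new"
mkdir -p converted
sed 's/$/ sample string/' "${origName}" > "${newName}"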

Bash Merging multiple files into single file after reading list of files from another external file

I have a file with the name filesList.txt which contains a list of all the files which need to be merged into a single file.
filesList.txt
------------------
../../folder/a.js
../../folder/b.js
../../folder/c.js
../../folder/d.js
Currently I am running the following commands.
cp filesList.txt filesList.sh
chmod 777 filesList.sh
vim filesList.sh
cat
../../folder/a.js
../../folder/b.js
../../folder/c.js
../../folder/d.js
> output.txt
Run the vim command j10 to turn the multiline file above into a single line like this:
cat ../../folder/a.js ../../folder/b.js ../../folder/c.js ../../folder/d.js > output.txt
Save and quit the file within vim using :wq
and run ./filesList.sh to create a single output.txt file with the contents in the exact same order the files are listed in.
My question is: what command do I need to use to create a bash script which reads the external list of files (filesList.txt) line by line and generates a single file from their contents, so I don't have to convert my filesList.txt into filesList.sh each time I need to merge the files?
A line-oriented file is a bad choice here (bad in the "any attacker who can control filenames can inject arbitrary files into your output" sense; you probably don't want to risk that someone who figures out how to create new .js files matching your glob can then introduce /etc/passwd to the list by creating ../../$'\n'/etc/passwd$'\n'/hello.js). Instead, separate values with NULs, and use xargs -0 (a non-POSIX extension, but a popular one provided by major OS vendors) to convert those into arguments.
printf '%s\0' ../../folder/*.js >filesList.nsv # generate file w/ null-separated values
xargs -0 cat <filesList.nsv >output.txt # combine to argument list split on NUL
By the way, if you want to generate your list of files recursively, that first part would become:
find ../../folder -name '*.js' -print0 >filesList.nsv
...and if you don't have any other need for filesList.nsv, I'd just avoid it entirely and generate output.txt directly:
find ../../folder -name '*.js' -exec cat '{}' + >output.txt
If you must use newlines but you have GNU xargs, at least use xargs -d $'\n' to process them; this avoids the quoting-related bugs you get from stock xargs or from more naive practices in bash:
printf '%s\n' ../../folder/*.js >filesList.txt # generate w/ newline-separated values
xargs -d $'\n' cat <filesList.txt >output.txt # combine on those values
If you don't have GNU xargs, then you can implement this yourself in shell:
# Newline-separated input
while IFS= read -r filename; do
cat "$filename"
done <filesList.txt >output.txt
# ...or NUL-separated input
while IFS= read -r -d '' filename; do
cat "$filename"
done <filesList.nsv >output.txt

How will I use a sed shell script to replace a pattern in a list of files

I have some 100 files which have my name, RAHUL, in them. I want it replaced with another term, RAHUL2. I have a file which contains the list of files, and I want to feed it to sed to do the changes.
files:
C:/desktop/file1.txt
C:/desktop/rahul/file1.txt
C:/desktop/rahul/file3.txt
C:/desktop/rahul/file4.txt
C:/desktop/rahul/file6.txt
C:/desktop/rahul/file8.txt
C:/desktop/rahul/file9.txt
and in each file's data, I want to replace all occurrences of the term RAHUL with RAHUL2
I assume you are using some Cygwin environment on Windows, since you have sed. Then you can use find to list all the files and execute sed on them:
find C:/desktop -type f -name 'file*.txt' -exec sed -i 's/RAHUL/RAHUL2/g' {} \;
Please make a backup of the original files if you are using sed -i but aren't sure the command is working yet. This is because sed -i will overwrite the original file. You have been warned.
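One easy way to keep that backup is to let sed create it for you by attaching a suffix to -i (GNU sed syntax, which is what a Cygwin environment normally provides):
find C:/desktop -type f -name 'file*.txt' -exec sed -i.bak 's/RAHUL/RAHUL2/g' {} \;
# each edited file gets an untouched .bak copy alongside it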
hek2mgl's answer is good if you want to search for all files matching the file*.txt pattern in your C:/desktop directory.
Now as you already have a file containing the list of files to edit, here is another way to proceed:
while read FILE ; do sed -i 's/RAHUL/RAHUL2/g' "$FILE" ; done < files_to_edit.txt
The read command will read your input one line after another. The input is the files_to_edit.txt file, as indicated by the < input redirection operator.
The remark about the -i option remains valid: it will edit your files in place, so make a backup, or at least run it first on a couple of files (possibly without the -i option, just to check what the output looks like).
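A slightly more robust variant of the same loop, using IFS= read -r so that leading whitespace and backslashes in the paths survive intact (a sketch):
while IFS= read -r file; do
    sed -i 's/RAHUL/RAHUL2/g' "$file"
done < files_to_edit.txt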

command line script to indent

I am looking for a simple command line script or shell script I can run that, given a directory, will traverse all directories, subdirectories and so on, looking for files with a .rb extension, and indent them to two spaces, regardless of current indentation.
It should then look for html, erb and js files (as well as less/sass) and indent them to 4.
Is this something that's simple, or am I just over-engineering it? I don't know bash that well; I have tried to create something before, but my friend said to use grep and I am lost. Any help?
If you've got GNU sed with the -i option to overwrite the files (with backups for safety), then:
find . -name '*.rb' -exec sed -i.bak 's/^/  /' {} +
find . -name '*.html' -exec sed -i.bak 's/^/    /' {} +
Etc.
The find generates the list of file names; it executes the sed command, backs up the files (-i.bak), and does the appropriate substitutions as requested. The + means 'do as many files at one time as is convenient'. This avoids problems with spaces in file names, amongst other issues.
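If you'd rather not repeat the command once per extension, you can group several globs with -o in a single find (a sketch, again assuming GNU sed; four spaces shown for the html/erb/js group as asked):
find . \( -name '*.html' -o -name '*.erb' -o -name '*.js' \) -exec sed -i.bak 's/^/    /' {} +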

Bash script to find files based on filename and do search replace on them

I have a bunch of files (with the same name, say abc.txt) strewn all over the network filesystem.
I need to recursively search for each of those files and once I find each one, do a content search and replace on them.
After some research, I see that I can find the files using the find command (with -r to recurse, right?). Something like:
find . -r -type f abc.txt
And use sed to do find and replace on each one:
sed -ie 's/search/replace/g' abc.txt
But I'm not sure how to glue the two together so that I find all occurrences of abc.txt and then do a search/replace on each one.
My directory tree is really large so I know a recursive search through all the directories will take a long time but that's better than manually trying to find each file.
I'm on OSX 10.6 with Bash.
Thanks all!
Update: I think the answer posted below may work for other OSes (Linux, perhaps), but for OSX I had to tweak it a bit.
find . -name abc.txt -exec sed -i '' 's/search/replace/g' {} +
The empty quotes seem to be required after -i in sed to indicate that we don't want to produce a backup file. The man page for this reads:
-i extension:
Edit files in-place, saving backups with the specified extension. If a zero-length extension is given, no backup will be saved.
find . -type f -name abc.txt -exec sed -i -e 's/search/replace/g' {} +
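For reference, the GNU/BSD difference behind the tweak mentioned in the update looks roughly like this (a sketch; check your platform's sed man page):
sed -i.bak 's/search/replace/g' abc.txt   # GNU sed (Linux): the suffix, if any, is attached to -i
sed -i '' 's/search/replace/g' abc.txt    # BSD sed (OSX): the suffix is a separate argument; '' means no backup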
