command line script to indent - bash

I am looking for a simple command-line or shell script I can run that, given a directory, will traverse all directories, subdirectories and so on, looking for files ending in .rb and indenting them by two spaces, regardless of current indentation.
It should then look for html, erb and js files (as well as less/sass) and indent them by four.
Is this something that's simple, or am I just over-engineering it? I don't know bash that well; I tried to create something before, but my friend said to use grep and I got lost. Any help?

If you've got GNU sed with the -i option to overwrite the files (with backups for safety), then:
find . -name '*.rb' -exec sed -i.bak 's/^/  /' {} +
find . -name '*.html' -exec sed -i.bak 's/^/    /' {} +
Etc.
The find generates the list of file names; it executes the sed command, which backs up each file (-i.bak) and makes the requested substitution. The + means 'do as many files at one time as is convenient'. This avoids problems with spaces in file names, amongst other issues.
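If you'd rather handle all the four-space extensions in a single pass instead of running one find per extension, here is a sketch along the same lines (same GNU sed assumption as above):
find . \( -name '*.html' -o -name '*.erb' -o -name '*.js' -o -name '*.less' -o -name '*.sass' \) -exec sed -i.bak 's/^/    /' {} +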

Related

How to append a String to all lines of all files in a directory in Bash?

I want to append a string to all lines of files inside a directory hierarchy. I know how to do this for a single file using the sed command:
sed -e 's/$/ sample string/' -i test.txt
but how do I write a script that does this for all files found in a directory hierarchy (and its sub-directories and sub-sub-directories and so on)?
for all files found in a directory hierarchy (and its sub-directories and sub-sub-directories and so on)
This is a job for find, plain and simple.
For example, the command:
find . -type f -exec sed -i 's/$/ sample string/' {} \;
will execute that sed command on all regular files in and under the current directory.
The find command has lots more options (depth limiting, not crossing filesystem boundaries, only working on certain file masks, and so on) that you can use but the ones above are probably all you need for your immediate question.
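For example, a sketch combining a few of those options (the depth and mask values here are purely illustrative):
find . -maxdepth 3 -xdev -type f -name '*.txt' -exec sed -i 's/$/ sample string/' {} \;
This limits the search to three directory levels, stays on one filesystem (-xdev), and only touches files matching *.txt.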
Keep in mind that sed -i does in-place editing, so if you make a mistake, you'd better make sure you've backed up the original files beforehand.
In the case where you want to create new files (as requested in a comment), you can bypass the -i and create new files. The safest way to do this is probably by putting the commands you want to execute in an executable script (e.g., myprog.sh):
#!/usr/bin/env bash
origName="$1"                 # file name passed in by find
newName="$1.new"              # write the result to a new file alongside it
sed 's/$/ sample string/' "${origName}" > "${newName}"
Then call that from your find:
find . -type f -exec ./myprog.sh {} \;
Then you can make the target file name an arbitrarily complex value calculated from the original.
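For example, a hypothetical variant of myprog.sh that mirrors its output into a separate converted/ tree rather than writing alongside the originals (the directory name is just an assumption for illustration):
#!/usr/bin/env bash
origName="$1"
newName="converted/${origName#./}"   # strip the leading ./ that find produces
mkdir -p "$(dirname "${newName}")"   # create parent directories as needed
sed 's/$/ sample string/' "${origName}" > "${newName}"
Remember to make the script executable (chmod +x myprog.sh) before handing it to find.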

How do I use grep to scan a folder of .scss files and remove the first line if it is blank?

I am getting into Linux and optimizing my workflow, and would love an example. I have a whole bunch of .scss files inside nested folders, and I need to check whether the first line is blank, delete it if so, then re-save the file. I'm working on Windows at work, but I like writing bash. I've experimented with:
grep -r "/^\n/"
But it seems to return every blank line, and I'm not too sure how to delete the line and then re-save the file.
This may get you started:
find . -name '*.scss' -exec sed -i.bak '1{/^$/d}' {} \;
To understand this command, we can break it into two parts:
find . -name '*.scss' -exec ... \;
Starting with the current directory, this looks recursively for files with names ending with .scss and, when it finds one, it runs the command that follows -exec on it.
sed -i.bak '1{/^$/d}' {}
sed is a stream editor. The option -i.bak tells it to change files in place, leaving behind a backup file with a .bak extension. Before find runs this command, it replaces {} with the actual name of the file that it found.
1{...} tells sed to select the first line of the file and apply the commands in braces to it.
/^$/ is a regular expression. It matches a line if the line is empty.
d tells sed to delete any matching line.
So, let's put that all together: if the first line of the file is empty, sed deletes it.
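If you want to preview the effect on one file before editing anything in place, you can drop the -i.bak and inspect the output (some.scss is just a placeholder name; this assumes the same GNU sed as above):
sed '1{/^$/d}' some.scss | head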
You will find many tutorials on the web on both find and sed. You can find detailed information on either using man find or man sed.
You don't use grep to edit files; you should use sed.
https://www.gnu.org/software/sed/manual/sed.html

Bash - output to loop using find

I have a list of files in a file (files.txt):
file1.txt
file2.txt
file3.txt
When I execute:
cat files.txt | while read line; do find -name $line >> outfile.txt; done
Why don't I get an outfile with a list of those files' paths?
Whereas if I execute:
find -name file1.txt >> outfile.txt
the task is performed
Many thanks
Clive
Apparently you edited files.txt on a Windows box, then copied it to your Linux or Unix server without converting line endings. This is not an uncommon problem, and multiple solutions exist. :)
Windows uses CR+LF (\r\n) for newlines, whereas unix and linux use only LF (\n).
The first and possibly easiest option may be to re-copy the files with the appropriate conversion in place, if your copy program supports such a thing. If you copied the files using the FTP protocol, check your client for a "type" option, which may be set to "ascii" or "bin"; it depends on your client. If you're using something like scp, which transfers files verbatim, then read on.
Another tried-and-true option is to use the dos2unix application which may already be installed on your unix or linux server. If it's not, you may be able to install it using your machine's package manager. (I don't know how, because you haven't mentioned what operating system you're using.) If installed, documentation for using dos2unix can be found by running man dos2unix. For example, if you want to convert all the text files matching *.txt in the current directory and all subdirectories under the current one, you might use the following:
find . -type f -name \*.txt -exec dos2unix -k -o {} \;
The options used here are as follows:
find . - tells find to search recursively, starting from the current directory.
-type f -name \*.txt restricts our search by file type and glob.
-exec runs the rest of the line up to \; on each file, with {} replaced with the filename.
dos2unix - well, you know.
-k - "keep" the timestamp on your original file.
-o - edit the "original" file rather than writing a new file.
If dos2unix isn't available, a number of other built-in tools may be able to do a similar job.
If you're running Linux, you can use GNU sed like this on just one file:
sed -i 's/\r$//' files.txt
Or to process all text files in the current directory:
for file in *.txt; do sed -i 's/\r$//' "$file"; done
Or to run this recursively, if you're using bash version 4 or higher:
shopt -s globstar
for file in **/*.txt; do sed -i 's/\r$//' "$file"; done
Another option for converting your line endings might be perl:
perl -pi -e 's/\r\n/\n/g' files.txt
You could easily make it handle multiple files either in one directory or recursively in a similar way to the options above, if you want.
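For example, a recursive sketch along the same lines as the sed loops above, driving perl from find (assuming GNU find, and that all your text files match *.txt):
find . -type f -name '*.txt' -exec perl -pi -e 's/\r\n/\n/g' {} +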
One more option might be to leave the files as-is, and do your conversion as you process the files.txt in bash. For example:
while read line; do
find . -name "${line%$'\r'}"
done < files.txt > outfile.txt
This uses the shell's parameter expansion combined with bash's ANSI-C quoting ($'\r') to strip the offending CR character from the end of each line as it is read by your while loop.
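To see that expansion in isolation (file1.txt is just a sample value):
line=$'file1.txt\r'       # simulate a line read from a CRLF file
echo "${line%$'\r'}"      # prints file1.txt with the trailing CR stripped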
Also of note:
https://en.wikipedia.org/wiki/Newline#Conversion_utilities

How to overwrite the contents with sed, without keeping a backup file

I have a command like this:
sed -i -e '/console.log/ s/^\/*/\/\//' *.js
which comments out all console.log statements. But there are two things:
It keeps a backup file like test.js-e, which I don't want.
Say I want to apply the same process recursively to a folder; how do I do that?
You don't have to use the -e option in this particular case, as it is unnecessary. That will solve your first problem (the -e is being taken as the suffix for the -i option).
For the second part, you can try something like this:
find . -type f -name '*.js' -exec sed -i '/console.log/ s/^\/*/\/\//' {} \;
Use find to recursively find all .js files and do the replacement.
Checking sed's help, -i takes an optional suffix and uses it for the backup file:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
and the backup file you're seeing is the original name plus -e, which is the second argument you're sending. Try removing the space and see if that works:
sed -ie '/console.log/ s/^\/*/\/\//' *.js
As for the recursion, you could use find with -exec or xargs; please modify the find command and test it before adding the -exec part:
find . -name '*.js' -type f -exec sed -ie '/console.log/ s/^\/*/\/\//' {} \;
From your original post, I presume you just want to change a C-style comment opener like:
/*
to a double-slash style like:
//
right?
Then you can do it with this command
find . -name "*.js" -type f -exec sed -i '/console.log/ s#^/\*#//#g' '{}' \;
Be aware that:
in sed the delimiter is normally /, but that gets annoying to escape when your matching or replacement string itself contains a /. You can change the delimiter to # or | as you like; I find this a very useful trick (see the example after this list).
if what you want to do is what I presumed, be sure to escape the * character, because the regex /* just means '/ occurring one time, many times, or not at all', which will match everywhere; that's very dangerous!
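For instance, here is the same substitution written with and without the alternate delimiter (file.js is a placeholder); note the escaped \* in both:
sed 's#/\*#//#' file.js
sed 's/\/\*/\/\//' file.js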

Bash script to find files based on filename and do search replace on them

I have a bunch of files (with same name, say abc.txt) strewn all over the network filesystem.
I need to recursively search for each of those files and once I find each one, do a content search and replace on them.
After some research, I see that I can find the files using the find command (with -r to recurse, right?). Something like:
find . -r -type f abc.txt
And use sed to do find and replace on each one:
sed -ie 's/search/replace/g' abc.txt
But I'm not sure how to glue the two together so that I find all occurrences of abc.txt and then do a search/replace on each one.
My directory tree is really large so I know a recursive search through all the directories will take a long time but that's better than manually trying to find each file.
I'm on OSX 10.6 with Bash.
Thanks all!
Update: I think the answer posted below may work for other OSes (Linux, perhaps), but for OSX I had to tweak it a bit:
find . -name abc.txt -exec sed -i '' 's/search/replace/g' {} +
The empty quotes seem to be required after -i in BSD sed to indicate that we don't want to produce a backup file. The man page for this reads:
-i extension:
Edit files in-place, saving backups with the specified extension. If a zero-length extension is given, no backup will be saved.
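In other words, the backup-free in-place invocation differs between the two implementations:
sed -i 's/search/replace/g' file.txt      # GNU sed (Linux): -i with no argument
sed -i '' 's/search/replace/g' file.txt   # BSD sed (macOS): explicit empty suffix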
find . -type f -name abc.txt -exec sed -i -e 's/search/replace/g' {} +
