sh script to replace text in multiple files - shell

I am trying to replace every occurrence in a .prm file of the string "/net/origin/devdata1/slin" with "/tools/common/test/HATS" in over a hundred files using sed. I think I am having trouble with the proper syntax for a for loop that iterates over the files in a directory (/home/AutoTest), and with what, if anything, I need as command-line arguments. Thanks in advance.
OLD="/net/origin/devdata1/slin"
NEW="/toolscommon/test/HATS"
DIR="/home/AutoTest"
for f in $DIR
do
cp $f $f.bak
sed 's+$OLD+$NEW+g' $f.bak > $f
[ -f "$f" ]
rm -f $f.bak
done

Using sed
Try:
old="/net/origin/devdata1/slin"
new="/toolscommon/test/HATS"
dir="/home/AutoTest"
sed -i "s+$old+$new+g" "$dir"/*
sed -i will update files in-place.
Also, it is best practice to use lower or mixed case names for your shell variables. The system uses all-caps names for its variables and you don't want to accidentally overwrite one of them.
There is a potential danger here if old or new contained any sed-active characters. If so, arbitrary files could be deleted or mangled.
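To make that concrete, here is a hypothetical illustration (the values are made up): the variables are expanded before sed parses its script, so any + or other active character in them becomes part of the script itself.
# Made-up values, purely to show the failure mode:
old='a+b'                 # an innocent + in the data...
sed "s+$old+x+g" file     # ...expands to s+a+b+x+g: sed parses pattern "a",
                          # replacement "b", then chokes on the flags "x+g"
new='x+w /tmp/victim'     # a crafted value smuggles in sed's w flag:
sed "s+a+$new+g" file     # expands to s+a+x+w /tmp/victim+g, which also
                          # truncates and writes to a file named /tmp/victim+g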
Using awk
old="/net/origin/devdata1/slin"
new="/toolscommon/test/HATS"
dir="/home/AutoTest"
awk -i inplace -v old="$old" -v new="$new" '{gsub(old, new)} 1' "$dir"/*
Because awk receives old and new as data rather than as part of the program text, nothing in them can inject additional editing commands, which makes this safer than the sed version. (gsub() does still interpret old as a dynamic regular expression, so metacharacters in it can change what matches.)
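If old may contain regex metacharacters, one option is a fully literal replacement built on index() and substr(), which perform plain string operations; a sketch, again assuming GNU awk 4.1+ for -i inplace:
awk -i inplace -v old="$old" -v new="$new" '
{
  out = ""
  # index() does a plain substring search, so nothing in old is special
  while ((i = index($0, old)) > 0) {
    out = out substr($0, 1, i - 1) new
    $0 = substr($0, i + length(old))
  }
  print out $0
}' "$dir"/*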

Related

Remove middle of filenames

I have a list of filenames like this in bash
UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R1.fq.gz
UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R2.fq.gz
UTSHoS11_Other_AGGCCT-TTAGGA_R_160418.R2.fq.gz
UTSHoS11_Other_AGGCCT-TTAGGA_R_160418.R2.fq.gz
UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R1.fq.gz
UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R2.fq.gz
And I want them to look like this
UTSHoS10_R1.fq.gz
UTSHoS10_R2.fq.gz
UTSHoS11_R1.fq.gz
UTSHoS11_R2.fq.gz
UTSHoS12_R1.fq.gz
UTSHoS12_R2.fq.gz
I do not have the perl rename command and sed 's/_Other*160418./_/' *.gz
is not doing anything. I've tried other rename scripts on here but either nothing occurs or my shell starts printing huge amounts of code to the console and freezes.
This post (Removing Middle of Filename) is similar however the answers given do not explain what specific parts of the command are doing so I could not apply it to my problem.
Parameter expansions in bash can perform string substitutions based on glob-like patterns, which allows for a more efficient solution than calling an extra external utility such as sed in each loop iteration:
for f in *.gz; do echo mv "$f" "${f/_Other_*-TTAGGA_R_160418./_}"; done
Remove the echo before mv to perform actual renaming.
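For example, testing the expansion on one of the names:
$ f=UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R1.fq.gz
$ echo "${f/_Other_*-TTAGGA_R_160418./_}"
UTSHoS10_R1.fq.gz
Here _Other_*-TTAGGA_R_160418. is a glob pattern (the * matches the varying CAAGCC part), and everything it matches is replaced by a single _.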
You can do something like this in the directory which contains the files to be renamed:
for file_name in *.gz
do
new_file_name=$(sed 's/_[^.]*\./_/g' <<< "$file_name");
mv "$file_name" "$new_file_name";
done
The pattern _[^.]*\. matches from the first _ through the first . that follows it (both inclusive); [^.]* means zero or more non-dot characters, so the match cannot run past a period.
Example:
AMD$ ls
UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R1.fq.gz UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R1.fq.gz
UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R2.fq.gz UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R2.fq.gz
UTSHoS11_Other_AGGCCT-TTAGGA_R_160418.R2.fq.gz
AMD$ for file_name in *.gz
> do new_file_name=$(sed 's/_[^.]*\./_/g' <<< "$file_name")
> mv "$file_name" "$new_file_name"
> done
AMD$ ls
UTSHoS10_R1.fq.gz UTSHoS10_R2.fq.gz UTSHoS11_R2.fq.gz UTSHoS12_R1.fq.gz UTSHoS12_R2.fq.gz
Pure Bash, using substring operation and assuming that all file names have the same length:
for file in UTS*.gz; do
echo mv -i "$file" "${file:0:9}${file:38:8}"
done
Outputs:
mv -i UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R1.fq.gz UTSHoS10_R1.fq.gz
mv -i UTSHoS10_Other_CAAGCC-TTAGGA_R_160418.R2.fq.gz UTSHoS10_R2.fq.gz
mv -i UTSHoS11_Other_AGGCCT-TTAGGA_R_160418.R2.fq.gz UTSHoS11_R2.fq.gz
mv -i UTSHoS11_Other_AGGCCT-TTAGGA_R_160418.R2.fq.gz UTSHoS11_R2.fq.gz
mv -i UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R1.fq.gz UTSHoS12_R1.fq.gz
mv -i UTSHoS12_Other_GGCAAG-TTAGGA_R_160418.R2.fq.gz UTSHoS12_R2.fq.gz
Once verified, remove echo from the line inside the loop and run again.
Going with your sed command, this can work as a bash one-liner:
for name in UTSH*fq.gz; do newname=$(echo "$name" | sed 's/_Other.*160418\./_/'); echo mv "$name" "$newname"; done
Notes:
I've adjusted your sed command: it had an * without a preceding . (sed takes a regular expression, not a globbing pattern). Similarly, the literal dot needs escaping.
To see if it works, without actually renaming the files, I've left the echo command in. Easy to remove just that to make it functional.
It doesn't have to be a one-liner, obviously. But sometimes, that makes editing and browsing your command-line history easier.

Iterating through files in a folder with sed

I have a list of csv files and would like to use a for loop to edit the content of each file. I'd like to do that with sed. I have this sed command, which works fine when testing it on one file:
sed 's/[ "-]//g'
So now I want to execute this command for each file in a folder. I've tried this but so far no luck:
for i in *.csv; do sed 's/[ "-]//g' > $i.csv; done
I would like it to overwrite each file with the edit performed by sed. The sed command removes all spaces, the " and the '-' character.
Small changes,
for i in *.csv
do
sed -i 's/[ "-]//g' "$i"
done
Changes:
When you iterate with the for loop, $i holds each filename in turn (one.csv, two.csv, etc.), and these can be passed directly to sed as input files.
-i is for in-place changes: sed performs the substitution and updates the file for you, so no output redirection is required.
In the code you wrote, sed is given no input at all (it would sit waiting on the terminal), and the redirection > $i.csv writes to a new file with .csv appended a second time (one.csv.csv).
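If your sed lacks -i (it is not POSIX, and some BSD versions require -i '' with an explicit empty suffix), you can get the same effect portably by writing to a temporary file and moving it back; a sketch:
for i in *.csv
do
sed 's/[ "-]//g' "$i" > "$i.tmp" && mv "$i.tmp" "$i"
done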
In my case I wanted to replace the first occurrence of a particular string on each line across several text files, and I used the following:
# replace 16 with 1 in each file, only for the first occurrence on each line
sed -i 's/16/1/' *.txt
In your case, you can try this in the terminal (note that without -i it only prints the result to standard output instead of modifying the files):
sed 's/[ "-]//g' *.csv
In certain scenarios it might be worth considering finding the files and executing a command on them, as explained in this answer (as stated there, make sure echo $PATH doesn't contain .):
find /path/to/csv/ -type f -name '*.csv' -execdir sed -i 's/[ "-]//g' {} \;
here we:
find all files (-type f) whose names end with .csv in the folder /path/to/csv/
sed the found files in place, i.e., we replace the original files with the changed versions instead of creating new files ($i.csv)
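A side note on the terminator (my addition): ending -exec with + instead of \; passes many filenames to each sed invocation, saving one process per file, which adds up over a large folder:
find /path/to/csv/ -type f -name '*.csv' -exec sed -i 's/[ "-]//g' {} +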

Deleting first n rows and column x from multiple files using Bash script

I am aware that the "deleting n rows" and "deleting column x" questions have both been answered individually before. My current problem is that I'm writing my first bash script, and am having trouble making that script work the way I want it to.
file0001.csv (there are several hundred files like these in one folder)
Data number of lines 540
No.,Profile,Unit
1,1027.84,µm
2,1027.92,µm
3,1028,µm
4,1028.81,µm
Desired output
1,1027.84
2,1027.92
3,1028
4,1028.81
I am able to use sed and cut individually but for some reason the following bash script doesn't take cut into account. It also gives me an error "sed: can't read ls: No such file or directory", yet sed is successful and the output is saved to the original files.
sem2csv.sh
for files in 'ls *.csv' #list of all .csv files
do
sed '1,2d' -i $files | cut -f '1-2' -d ','
done
Actual output:
1,1027.84,µm
2,1027.92,µm
3,1028,µm
4,1028.81,µm
I know there may be awk one-liners but I would really like to understand why this particular bash script isn't running as intended. What am I missing?
The -i option of sed modifies the file in place. Your pipeline to cut receives no input because sed -i produces no output. Without this option, sed would write the results to standard output, instead of back to the file, and then your pipeline would work; but then you would have to take care of writing the results back to the original file yourself.
Moreover, single quotes inhibit expansion -- you are "looping" over the single literal string ls *.csv. The fact that you are not quoting it properly then causes the string to be subject to wildcard expansion inside the loop. So after variable interpolation, your sed command expands to
sed -i 1,2d ls *.csv
and then the shell expands *.csv because it is not quoted. (You should have been receiving a warning that there is no file named ls in the current directory, too.) You probably attempted to copy an example which used backticks (ASCII 96) instead of single quotes (ASCII 39) -- the difference is quite significant.
Anyway, the ls is useless -- the proper idiom is
for files in *.csv; do
sed '1,2d' "$files" ... # the double quotes here are important
done
Mixing sed and cut is usually not a good idea because you can express anything cut can do in terms of a simple sed script. So your entire script could be
for f in *.csv; do
sed -i -e '1,2d' -e 's/,[^,]*$//' "$f"
done
which says to remove the last comma and everything after it. (If your sed does not like multiple -e options, try with a semicolon separator: sed -i '1,2d;s/,[^,]*$//' "$f")
You may use awk,
$ awk 'NR>2{sub(/,[^,]*$/,"",$0);print}' file
1,1027.84
2,1027.92
3,1028
4,1028.81
or
sed -i '1,2d;s/,[^,]*$//' file
1,2d; for deleting the first two lines.
s/,[^,]*$// removes the last comma part in remaining lines.
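Since the question involves several hundred files, note that awk's NR keeps counting across all input files, whereas FNR resets for each file. With GNU awk 4.1+ you can combine FNR with in-place editing to process the whole folder at once; a sketch:
gawk -i inplace 'FNR>2{sub(/,[^,]*$/,""); print}' *.csv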

Removing lines from multiple files with sed command

So, disclaimer: I am pretty new to using bash and zsh, so there is a chance the answer is really simple. Nonetheless. I checked previous postings and couldn't find anything. (edit: I have tried this in both bash and zsh shells- same problem.)
I have a directory with many files and am trying to remove the first line from each file.
So say the directory contains: file1.txt file2.txt file3.txt ... etc.
I am using the sed command (non-GNU):
sed -i -e "1d" *.txt
For some reason, this is only removing the first line of the first file. I thought that the *.txt would affect all files matching the pattern in the directory. Strangely, it is creating duplicates of the files with -e appended to their names, but the duplicates and the originals are identical.
I tried this with other commands (e.g. ls *.txt) and it works fine. Is there something about sed I am missing?
Thank you in advance.
Different versions of sed in differing operating systems support various parameters.
OpenBSD (5.4) sed
The -i flag is unavailable. You can use the following /bin/sh syntax:
for i in *.txt
do
f=`mktemp -p .`
sed -e "1d" "${i}" > "${f}" && mv -- "${f}" "${i}"
done
FreeBSD (11-CURRENT) sed
The -i flag requires an extension argument, even if it's empty, so it must be written as sed -i "" -e "1d" *.txt
GNU sed
GNU sed takes an optional backup suffix attached directly to the -i flag. Bare -i modifies the file in place with no backup; with a suffix such as -i.bak, it renames the original with the .bak extension and writes the modified text under the original file's name. A separate argument following -i is never treated as the suffix; it is parsed as the script or a filename.
There might be other variations on other platforms, but those are the three I have at hand.
Use it without -e!
for one file use:
sed -i '1d' filename
for all files use:
sed -i '1d' *.txt
or
files=/path/to/files/*.extension ; for var in $files ; do sed -i '1d' $var ; done
For me, on Ubuntu and Debian-based systems, this method works 100%, but I'm not sure about other platforms, so here is another method:
Replace the first line with an empty pattern, then remove the empty lines (two commands):
for files in $(ls /path/to/files/*.txt); do sed -i "s/$(head -1 "$files")//g" "$files" ; sed -i '/^$/d' "$files" ; done
Note: if your first lines contain a slash '/', this will give an error; in that case, use a different delimiter in the sed command, e.g. sed -i "s[$(head -1 "$files")[[g"
hope that's what you're looking for :)
The issue here is that the line number isn't reset when sed opens a new file, so 1 only matches the first line of the first file.
One solution is to use a shell loop, calling sed once for each file. Gumnos' answer shows how to do this in the most widely compatible way, although if you have a version of sed supporting the -i flag, you could do this instead:
for i in *.txt; do
sed -i.bak '1d' "$i"
done
It is possible to avoid creating the backup file by passing an empty suffix but personally, I don't think it's such a bad thing. One day you'll be grateful for it!
It appears that you're not working with GNU tools but if you were, I would recommend using GNU awk for this task. The variable FNR is useful here, as it keeps track of the record number for each file individually, allowing you to do this:
gawk -i inplace 'FNR>1' *.txt
Using the inplace extension, this allows you to remove the first line from each of your files, by only printing the lines where FNR is greater than 1.
Testing it out:
$ seq 5 > file1
$ seq 5 > file2
$ gawk -i inplace 'FNR>1' file1 file2
$ cat file1
2
3
4
5
$ cat file2
2
3
4
5
The last argument you are passing to sed is the problem.
Try something like this:
var=(`find *txt`)
for file in "${var[#]}"
do
sed -i -e 1d $file
done
This did the trick for me.

sed delete not working with cat variable

I have a file named test-domain, the contents of which contain the line 100.am.
When I do this, the line with 100.am is deleted from the test-domain file, as expected:
for x in $(echo 100.am); do sed -i "/$x/d" test-domain; done
However, if instead of echo 100.am, I read each line from a file named unwanted-lines, it does NOT work.
for x in $(cat unwanted-lines); do sed -i "/$x/d" test-domain; done
This is even if the only contents of unwanted-lines is one line, with the exact contents 100.am.
Does anyone know why sed delete line works if you use echo in your variable, but not if you use cat?
fgrep -v -f unwanted-lines test-domain > /tmp/Buffer
mv /tmp/Buffer test-domain
sed is not attractive in this case because it is invoked once per unwanted line in a shell loop (poor efficiency and a lot of resources used). A way to still use sed would be to preload the lines to delete and build a single search from them, but that is very heavy compared to fgrep here.
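One refinement: without -x, the fixed-string patterns match anywhere in a line, so 100.am would also delete a line containing x100.amz. If the entries should only match whole lines, add -x (using grep -F, the modern spelling of fgrep):
grep -v -F -x -f unwanted-lines test-domain > /tmp/Buffer
mv /tmp/Buffer test-domain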
Does anyone know why sed delete line works if you use echo in your
variable, but not if you use cat?
I believe that your file of unwanted lines contains CR+LF line endings, which is why it doesn't work when you read from the file: the trailing CR becomes part of $x, so the pattern no longer matches. You could strip the CR in your loop:
for x in $(cat unwanted-lines); do x="${x//$'\r'}"; sed -i "/$x/d" test-domain; done
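To confirm the diagnosis, and to fix the file once rather than stripping the CR on every iteration, standard tools suffice:
cat -v unwanted-lines        # a CR shows up as ^M at the end of each line
tr -d '\r' < unwanted-lines > unwanted-lines.clean
mv unwanted-lines.clean unwanted-lines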
One better strategy than yours would be to use a genuine editor, e.g., ed, like so:
ed -s test-domain < <(
shopt -s extglob
while IFS= read -r l; do
[[ $l = *([[:space:]]) ]] && continue
l=${l//./\\.}
echo "g/$l/d"
done < unwanted-lines
echo "wq"
)
Caveat. You must make sure that the file unwanted-lines doesn't contain any character that could clash with ed's regexps and commands. I have already included a match for a period (i.e., replace . with \.).
This method is quite efficient, as you're not forking so many times on sed, writing temp files, renaming them, etc.
Another possibility would be to use grep, but then you won't have the editing option ed offers.
Remark. ed is the standard editor.
Why not just apply the sed command to your file directly?
sed -i '/.*100\.am/d' your_file
