Use zcat and sed or awk to edit compressed .gz text file - bash

I am trying to edit compressed fastq.gz text files by removing the first six characters of lines 2, 6, 10, 14, and so on. I have two different ways of doing this right now, using either awk or sed, but these only seem to work if the files are unzipped. I would like to edit the files without unzipping them and tried the following code without getting it to work. Thanks.
Using sed:
zcat /dir/* | sed -i~ '2~4s/^.\{6\}//'
Using awk:
zcat /dir/* | awk 'NR%4==2 {gsub(/^....../,"")} 1'

You can't bypass compression, but you can chain the decompress/edit/recompress steps together in an automated fashion:
for f in /dir/*; do
  cp "$f" "$f~" &&
  gzip -cd "$f~" | sed '2~4s/^.\{6\}//' | gzip > "$f"
done
If you're quite confident in the operation, you can remove the backup files by adding rm "$f~" to the end of the loop body.
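If you prefer the awk version from the question, the same backup-then-rewrite loop works with the pipeline swapped, a sketch using the awk program as posted:

for f in /dir/*; do
  cp "$f" "$f~" &&
  gzip -cd "$f~" | awk 'NR%4==2 {gsub(/^....../,"")} 1' | gzip > "$f"
done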

I wrote a script called zawk which can do this natively. It's similar to glenn jackman's answer to a duplicate of this question, but it handles awk options and several different compression mechanisms and input methods while retaining FILENAME and FNR.
You'd use it like:
zawk 'awk logic goes here' log*.gz
This does not address sed's "in-place" flag (-i).
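For illustration only, a toy approximation of the idea (not the real zawk, and without its option handling) could look like the sketch below; the helper name zawk_lite and the variable F are made up here:

# zawk_lite: hypothetical toy sketch. Decompress each file and run the
# awk program over it, passing the original file name in the variable F.
zawk_lite() {
  local prog=$1; shift
  local f
  for f in "$@"; do
    gzip -cd "$f" | awk -v F="$f" "$prog"
  done
}
# usage: zawk_lite '{print F ": " $0}' log*.gz

Since awk reads a pipe here, its own FILENAME is empty, which is why the name is passed in explicitly; the real zawk does better than this.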

Related

How to delete a line (matching a pattern) from a text file? [duplicate]

How would I use sed to delete all lines in a text file that contain a specific string?
To remove the line and print the output to standard out:
sed '/pattern to match/d' ./infile
To directly modify the file – does not work with BSD sed:
sed -i '/pattern to match/d' ./infile
Same, but for BSD sed (Mac OS X and FreeBSD) – does not work with GNU sed:
sed -i '' '/pattern to match/d' ./infile
To directly modify the file (and create a backup) – works with BSD and GNU sed:
sed -i.bak '/pattern to match/d' ./infile
There are many other ways to delete lines with a specific string besides sed:
AWK
awk '!/pattern/' file > temp && mv temp file
Ruby (1.9+)
ruby -i.bak -ne 'print if not /test/' file
Perl
perl -ni.bak -e "print unless /pattern/" file
Shell (bash 3.2 and later)
while IFS= read -r line
do
  [[ ! $line =~ pattern ]] && echo "$line"
done <file > o
mv o file
GNU grep
grep -v "pattern" file > temp && mv temp file
And of course sed (printing the inverse is faster than actual deletion):
sed -n '/pattern/!p' file
You can use sed to delete lines in place in a file. However, it seems to be much slower than using grep for the inverse into a second file and then moving the second file over the original.
e.g.
sed -i '/pattern/d' filename
or
grep -v "pattern" filename > filename2; mv filename2 filename
The first command takes 3 times longer on my machine anyway.
The easy way to do it, with GNU sed:
sed --in-place '/some string here/d' yourfile
You may consider using ex (which is a standard Unix command-based editor):
ex +g/match/d -cwq file
where:
+ executes the given Ex command (see man ex), the same as -c; here it executes wq (write and quit)
g/match/d - Ex command to delete lines with given match, see: Power of g
The above example is a POSIX-compliant method for in-place editing a file as per this post at Unix.SE and POSIX specifications for ex.
The difference with sed is that:
sed is a Stream EDitor, not a file editor. Unless you enjoy unportable code, I/O overhead and some other bad side effects. (BashFAQ)
So basically, some parameters (such as the in-place flag -i) are non-standard FreeBSD extensions and may not be available on other operating systems.
I was struggling with this on Mac. Plus, I needed to do it using variable replacement.
So I used:
sed -i '' "/$pattern/d" $file
where $file is the file where deletion is needed and $pattern is the pattern to be matched for deletion.
I picked the '' from this comment.
The thing to note here is the use of double quotes in "/$pattern/d". The variable won't be expanded if we use single quotes.
You can also use this:
grep -v 'pattern' filename
Here -v will print only the lines that don't match your pattern (that is, invert match).
To get an in-place-like result with grep you can do this:
echo "$(grep -v "pattern" filename)" >filename
I have made a small benchmark with a file which contains approximately 345,000 lines. The way with grep seems to be around 15 times faster than the sed method in this case.
I have tried it both with and without the setting LC_ALL=C; it does not seem to change the timings significantly. The search string (CDGA_00004.pdbqt.gz.tar) is somewhere in the middle of the file.
Here are the commands and the timings:
time sed -i "/CDGA_00004.pdbqt.gz.tar/d" /tmp/input.txt
real 0m0.711s
user 0m0.179s
sys 0m0.530s
time perl -ni -e 'print unless /CDGA_00004.pdbqt.gz.tar/' /tmp/input.txt
real 0m0.105s
user 0m0.088s
sys 0m0.016s
time (grep -v CDGA_00004.pdbqt.gz.tar /tmp/input.txt > /tmp/input.tmp; mv /tmp/input.tmp /tmp/input.txt )
real 0m0.046s
user 0m0.014s
sys 0m0.019s
Delete the matching lines from all files that contain the match:
grep -rl 'text_to_search' . | xargs sed -i '/text_to_search/d'
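With GNU grep and xargs, NUL-delimited file names make the same pipeline safe for names containing spaces or newlines (a sketch; -Z and -0 are GNU extensions):

grep -rlZ 'text_to_search' . | xargs -0 sed -i '/text_to_search/d'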
SED:
'/James\|John/d'
-n '/James\|John/!p'
AWK:
'!/James|John/'
/James|John/ {next;} {print}
GREP:
-v 'James\|John'
perl -i -nle'/regexp/||print' file1 file2 file3
perl -i.bk -nle'/regexp/||print' file1 file2 file3
The first command edits the file(s) in place (-i).
The second command does the same thing but keeps a copy or backup of the original file(s) by adding .bk to the file names (.bk can be changed to anything).
You can also delete a range of lines in a file.
For example, to delete stored procedures in an SQL file.
sed '/CREATE PROCEDURE.*/,/END ;/d' sqllines.sql
This will remove all lines between CREATE PROCEDURE and END ;.
I have cleaned up many SQL files with this sed command.
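A quick way to see the inclusive range behavior, namely that both delimiter lines are removed along with everything between them (hypothetical sample input):

printf '%s\n' 'keep this' 'CREATE PROCEDURE foo()' 'SELECT 1;' 'END ;' 'keep that' \
  | sed '/CREATE PROCEDURE.*/,/END ;/d'
# prints:
# keep this
# keep that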
echo -e "/thing_to_delete\ndd\033:x\n" | vim file_to_edit.txt
Just in case someone wants to do it for exact matches of strings, you can use the -w flag in grep (w for whole word). That is, for example, if you want to delete the lines that have the number 11, but keep the lines with the number 111:
-bash-4.1$ head file
1
11
111
-bash-4.1$ grep -v "11" file
1
-bash-4.1$ grep -w -v "11" file
1
111
It also works with the -f flag if you want to exclude several exact patterns at once. If "blacklist" is a file with several patterns, one per line, that you want to delete from "file":
grep -w -v -f blacklist file
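For example, with a hypothetical blacklist containing the patterns 1 and 11, run against the same file as above, only the line 111 survives:

-bash-4.1$ printf '%s\n' 1 11 > blacklist
-bash-4.1$ grep -w -v -f blacklist file
111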
to show the treated text in console
cat filename | sed '/text to remove/d'
to save treated text into a file
cat filename | sed '/text to remove/d' > newfile
to append treated text into an existing file
cat filename | sed '/text to remove/d' >> newfile
to treat already treated text, in this case remove more lines of what has been removed
cat filename | sed '/text to remove/d' | sed '/remove this too/d' | more
the | more will show text in chunks of one page at a time.
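The chained deletions can also be combined into a single sed invocation with multiple -e expressions, which saves a process:

cat filename | sed -e '/text to remove/d' -e '/remove this too/d' | more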
Curiously enough, the accepted answer does not actually answer the question directly. The question asks about using sed to delete lines containing a string, but the answer seems to presuppose knowledge of how to convert an arbitrary string into a regex.
Many programming language libraries have a function to perform such a transformation, e.g.
python: re.escape(STRING)
ruby: Regexp.escape(STRING)
java: Pattern.quote(STRING)
But how to do it on the command line?
Since this is a sed-oriented question, one approach would be to use sed itself:
sed 's/\([\[/({.*+^$?]\)/\\\1/g'
So given an arbitrary string $STRING we could write something like:
re=$(sed 's/\([\[/({.*+^$?]\)/\\\1/g' <<< "$STRING")
sed "/$re/d" FILE
or as a one-liner:
sed "/$(sed 's/\([\[/({.*+^$?]\)/\\\1/g' <<< "$STRING")/d"
with variations as described elsewhere on this page.
cat filename | grep -v "pattern" > filename.1
mv filename.1 filename
You can use good old ed to edit a file in a similar fashion to the answer that uses ex. The big difference in this case is that ed takes its commands via standard input, not as command line arguments like ex can. When using it in a script, the usual way to accommodate this is to use printf to pipe commands to it:
printf "%s\n" "g/pattern/d" w | ed -s filename
or with a heredoc:
ed -s filename <<EOF
g/pattern/d
w
EOF
This solution does the same operation on multiple files.
for file in *.txt; do grep -v "Matching Text" "$file" > temp_file.txt; mv temp_file.txt "$file"; done
Most of the answers were not useful for me. If you use vim, I found this very easy and straightforward:
:g/<pattern>/d

Iterating through files in a folder with sed

I have a list of CSV files and would like to use a for loop to edit the content of each file. I'd like to do that with sed. I have this sed command, which works fine when testing it on one file:
sed 's/[ "-]//g'
So now I want to execute this command for each file in a folder. I've tried this but so far no luck:
for i in *.csv; do sed 's/[ "-]//g' > $i.csv; done
I would like it to overwrite each file with the edit performed by sed. The sed command removes all spaces, the " and the '-' character.
Small changes,
for i in *.csv
do
  sed -i 's/[ "-]//g' "$i"
done
Changes:
When you iterate through the for loop, you get the filenames in $i, for example one.csv, two.csv, etc. You can use these directly as input to the sed command.
-i is for in-place changes: sed will do the substitution and update the file for you. No output redirection is required.
In the code you wrote, I guess you missed giving any input to the sed command.
In my case I wanted to replace the first occurrence of a particular string in each line, for several text files. I used the following:
# replace 16 with 1 in each file, only for the first occurrence on each line
sed -i 's/16/1/' *.txt
In your case, in the terminal you can try this:
sed 's/[ "-]//g' *.csv
In certain scenarios it might be worth considering finding the files and executing a command on them like explained in this answer (as stated there, make sure echo $PATH doesn't contain .)
find /path/to/csv/ -type f -name '*.csv' -execdir sed -i 's/[ "-]//g' {} \;
here we:
find all files (-type f) whose names end with .csv in the folder /path/to/csv/
sed the found files in place, i.e. we replace the original files with the changed versions instead of creating extra files ($i.csv)
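If you don't specifically need -execdir, ending the action with + instead of \; hands many files to a single sed process, which is noticeably faster on large trees:

find /path/to/csv/ -type f -name '*.csv' -exec sed -i 's/[ "-]//g' {} +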

Removing lines from multiple files with sed command

So, disclaimer: I am pretty new to using bash and zsh, so there is a chance the answer is really simple. Nonetheless, I checked previous postings and couldn't find anything. (Edit: I have tried this in both bash and zsh shells, same problem.)
I have a directory with many files and am trying to remove the first line from each file.
So say the directory contains: file1.txt file2.txt file3.txt ... etc.
I am using the sed command (non-GNU):
sed -i -e "1d" *.txt
For some reason, this is only removing the first line of the first file. I thought that *.txt would affect all files matching the pattern in the directory. Strangely, it is creating duplicates of the files with -e appended, but the duplicate and the original are the same.
I tried this with other commands (e.g. ls *.txt) and it works fine. Is there something about sed I am missing?
Thank you in advance.
Different versions of sed in differing operating systems support various parameters.
OpenBSD (5.4) sed
The -i flag is unavailable. You can use the following /bin/sh syntax:
for i in *.txt
do
  f=`mktemp -p .`
  sed -e "1d" "${i}" > "${f}" && mv -- "${f}" "${i}"
done
FreeBSD (11-CURRENT) sed
The -i flag requires an extension, even if it's empty. It must thus be written as sed -i "" -e "1d" *.txt
GNU sed
GNU sed's -i takes an optional backup suffix, but it must be attached to the flag itself (it never consumes the following argument). Plain -i modifies the file in place; -i.bak renames the original with a ".bak" extension and then writes the modified content under the original file's name.
There might be other variations on other platforms, but those are the three I have at hand.
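If one script has to run on both families, a small wrapper can probe for GNU sed. A sketch: the helper name sedi is made up here, and it assumes the installed sed supports -i at all (older OpenBSD sed, as noted above, does not):

# sedi: run sed in-place portably (hypothetical helper).
# GNU sed accepts --version; BSD sed does not.
sedi() {
  if sed --version >/dev/null 2>&1; then
    sed -i "$@"        # GNU sed: optional suffix must be attached to -i
  else
    sed -i '' "$@"     # BSD sed: -i needs an explicit (possibly empty) suffix
  fi
}
# usage: sedi '1d' *.txt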
Use it without -e!
for one file use:
sed -i '1d' filename
for all files use :
sed -i '1d' *.txt
or
files=/path/to/files/*.extension ; for var in $files ; do sed -i '1d' "$var" ; done
For me, on Ubuntu and Debian-based systems, this method works 100%, but for other platforms I'm not sure, so here is another method:
Replace the first line with an empty pattern, then remove the empty lines (two commands):
for files in /path/to/files/*.txt; do sed -i "s/$(head -1 "$files")//g" "$files" ; sed -i '/^$/d' "$files" ; done
Note: if your first lines contain a slash '/', this will give an error; in that case the sed command should look like this ( sed -i "s[$(head -1 "$files")[[g" )
hope that's what you're looking for :)
The issue here is that the line number isn't reset when sed opens a new file, so 1 only matches the first line of the first file.
One solution is to use a shell loop, calling sed once for each file. Gumnos' answer shows how to do this in the most widely compatible way, although if you have a version of sed supporting the -i flag, you could do this instead:
for i in *.txt; do
sed -i.bak '1d' "$i"
done
It is possible to avoid creating the backup file by passing an empty suffix but personally, I don't think it's such a bad thing. One day you'll be grateful for it!
It appears that you're not working with GNU tools but if you were, I would recommend using GNU awk for this task. The variable FNR is useful here, as it keeps track of the record number for each file individually, allowing you to do this:
gawk -i inplace 'FNR>1' *.txt
Using the inplace extension, this allows you to remove the first line from each of your files, by only printing the lines where FNR is greater than 1.
Testing it out:
$ seq 5 > file1
$ seq 5 > file2
$ gawk -i inplace 'FNR>1' file1 file2
$ cat file1
2
3
4
5
$ cat file2
2
3
4
5
The last argument you are passing to sed is the problem.
Try something like this:
var=(`find *txt`)
for file in "${var[@]}"
do
  sed -i -e 1d "$file"
done
This did the trick for me.

remove files less than a certain size and extract filenames

I am working on a cluster remotely and submit a few thousand jobs. Some jobs crash early. I need to move the output files of those jobs (smaller than 1KB) to another folder and start them again. I guess find can move them with something like:
find . -size -1000c -exec mv {} ../crashed \;
but I also need to restart these crashed jobs. The output files are in a bunch of folders inside the output folder, and I need the folder name and the file name (without extension) separately.
I guess sed and/or awk can do this easily but I am not sure how. By the way, I am working in the Bash shell.
I am trying to use cut, which seems to be working:
for i in $( find . -size -1000c )
do
  FOLDER=$(echo "${i%.*}" | cut -d'/' -f2)
  FILENAME=$(echo "${i%.*}" | cut -d'/' -f3)
done
But wouldn't it be better using sed or awk? And how?
Sed is a stream editor and since you're not changing anything I wouldn't use it in this case. You could use awk instead of cut like this:
FOLDER=$(echo "${i%.*}" | awk -v FS="/" '{ print $2 }')
where -v FS="/" specifies that the variable FS (the field separator, much the same as the -d option in cut) is a slash, and print $2 tells awk to print only the second field.
Same goes for the other instruction you have there. In your case what you have to do is simple enough, so cut actually cuts it :D
I usually use awk for more complicated tasks, involving multiple files and/or mathematical computations.
Edit:
note that I'm using gawk here (the awk implementation by GNU). I'm not sure you can pass a variable value with the -v option in other implementations; they'll have their own way to do it.
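For what it's worth, plain bash parameter expansion can do the same split without spawning cut or awk. A sketch, assuming paths of the form ./folder/file.ext as produced by the find above (path is a scratch variable introduced here):

path=${i%.*}           # strip the extension: ./folder/file
FILENAME=${path##*/}   # last path component: file
FOLDER=${path%/*}      # ./folder
FOLDER=${FOLDER##*/}   # folder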

Bash shell remove rows from a text file

I have a big domain list file for a proxy filter. In another file I have some exceptions; I would like to remove from the filter file all rows of the exceptions file. Is it possible with some sed operation?
Thanks.
You can generally use grep with the -v and -f options for this. In fact, you probably want to use fgrep or the -F flag as well to ensure the strings are considered as fixed strings rather than regexes. Without that, for example, the first line of the infile file below will be removed despite not actually matching the fixed string.
-v reverses the sense so that matching lines are thrown away rather than kept, and -f will get the patterns from a file rather than the command line.
For example:
pax> cat infile
http://wwwxdodgy.com/rest-of-url
http://www.dodgy.com/rest-of-url
ftp://this/one/is/good
https://www.bad.org/rest-of-url
pax> cat exceptions
http://www.dodgy.com
https://www.bad.org
pax> fgrep -v -f exceptions infile
ftp://this/one/is/good
It is easier to do this with grep:
grep -v -x -F -f /path/to/exclude /path/to/file
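If both files are sorted (or you can afford to sort them), comm computes the same whole-line set difference and scales well to very large lists. A sketch using bash process substitution:

comm -23 <(sort /path/to/file) <(sort /path/to/exclude)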
