Removing text using grep - bash

I'm trying to remove lines that contain a particular pattern from a text file. I have the following code, which does not work:
grep -v "$varName" config.txt
Can anyone tell me how I can make it work properly? I want to do it using grep, not sed.

You can use sed with the in-place option -i:
sed -i '/pattern/d' file

grep doesn't modify files. The best you can do if you insist on using grep and not sed is
grep -v "$varName" config.txt > $$ && mv $$ config.txt
Note that I'm using $$ as the temporary file name because it's the PID of your bash script, and therefore probably not a file name that some other bash script is going to use. I'd encourage using $$ in temp file names in bash, especially for scripts that might be run multiple times simultaneously.
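If you want something even less likely to collide than $$, mktemp (where available) generates a unique name and the same write-then-rename pattern applies. A minimal sketch with an illustrative pattern; note that grep exits non-zero when it selects no lines, so a file whose every line matches would be left untouched here:
varName="some.pattern"    # illustrative value
tmp=$(mktemp) || exit 1
grep -v "$varName" config.txt > "$tmp" && mv "$tmp" config.txt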

Try using -Ev:
grep -Ev 'item0|item1|item2|item3'
That will filter out lines containing item0 through item3. Let me know if this helps.
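Since grep only prints to standard output, you would still combine that with the temporary-file trick above to update a file in place. A sketch with illustrative file names:
grep -Ev 'item0|item1|item2|item3' config.txt > config.tmp && mv config.tmp config.txt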

Related

I want to pipe grep output to sed for input

I'm trying to pipe the output of grep to sed so that sed will only edit specific files. I don't want sed to touch a file without actually changing it (which would update its modified date).
I'm searching with grep and writing with sed. That's it.
The character I'm trying to change is a dash, but not the normal kind: "-" is normal; "–" isn't.
The code I currently have:
sed -i 's/– foobar/- foobar/g' * ; perl-rename 's/– foobar/- foobar/' *'– foobar'*
Sorry about the trouble, I'm inexperienced.
Are you sure about what you want to achieve? Let me explain:
grep "string_in_file" <filelist> | sed <sed_script>
This first shows the lines containing "string_in_file", preceded by the filename.
If you run sed on this, it will just show you the result of that sed script on screen, but it will not change the files themselves. In order to do that, you need the following:
grep -l "string_in_file" <filelist> | sed <sed_script_on_file>
The grep -l shows you only the matching filenames, and the new sed_script_on_file needs to be a script that reads each file and alters it.
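One way to wire that together, as a rough sketch (assuming GNU sed for -i; the pattern and replacement are illustrative, and this breaks on file names containing newlines):
grep -l 'string_in_file' *.txt | while IFS= read -r f; do
    sed -i 's/string_in_file/replacement/g' "$f"
done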
Thank you all for helping; I'm sorry I wasn't faster in responding.
After a bit of fiddling with the command, I got it:
grep -l 'old' * | xargs -d '\n' sed -i 's/old/new/'
This should only touch files that contain old and leave all other files alone.
This might be what you're trying to do if your file names don't contain newlines:
grep -l -- 'old' * | xargs sed -i 's/old/new/'
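If the file names might contain newlines and you're on GNU tools, a NUL-separated variant of the same pipeline should work (a sketch, assuming GNU grep, xargs, and sed):
grep -lZ -- 'old' * | xargs -0 -r sed -i 's/old/new/'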

Remove Lines in Multiple Text Files that Begin with a Certain Word

I have hundreds of text files in one directory. For all files, I want to delete all the lines that begin with HETATM. I need a csh or bash script.
I would think you would use grep, but I'm not sure.
Use sed like this:
sed -i -e '/^HETATM/d' *.txt
to process all files in place.
-i means "in place".
-e means to execute the command that follows.
/^HETATM/ means "find lines starting with HETATM", and the following d means "delete".
Make a backup first!
If you really want to do it with grep, you could do this:
#!/bin/bash
for f in *.txt
do
grep -v "^HETATM" "%f" > $$.tmp && mv $$.tmp "$f"
done
It makes a temporary file of the output from grep (in file $$.tmp) and only overwrites your original file if the command executes successfully.
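A slightly more defensive variant of the same loop, assuming mktemp is available; as with the version above, grep exits non-zero when it selects no lines, so a file consisting entirely of HETATM lines would be left untouched:
#!/bin/bash
for f in *.txt
do
    tmp=$(mktemp) || exit 1
    grep -v "^HETATM" "$f" > "$tmp" && mv "$tmp" "$f"
done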
Using the -v option of grep to get all the lines that do not match:
grep -v '^HETATM' input.txt > output.txt

Bash shell remove rows from a text file

I have a big domain list file for a proxy filter. In another file I have some exceptions, and I would like to remove from the filter file all the rows that appear in the exceptions file. Is it possible with some "sed" operation?
Thanks.
You can generally use grep with the -v and -f options for this. In fact, you probably want to use fgrep or the -F flag as well to ensure the strings are treated as fixed strings rather than regexes. Without that, for example, the first line of the infile file below would be removed despite not actually matching the fixed string.
-v reverses the sense so that matching lines are thrown away rather than kept, and -f gets the patterns from a file rather than the command line.
For example:
pax> cat infile
http://wwwxdodgy.com/rest-of-url
http://www.dodgy.com/rest-of-url
ftp://this/one/is/good
https://www.bad.org/rest-of-url
pax> cat exceptions
http://www.dodgy.com
https://www.bad.org
pax> fgrep -v -f exceptions infile
ftp://this/one/is/good
It is easier to do this with grep:
grep -v -x -F -f /path/to/exclude /path/to/file
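The -x makes each pattern match a whole line only, so entries in the exclude file have to be complete lines of the input rather than prefixes. A sketch with illustrative file names:
grep -v -x -F -f exceptions.txt domains.txt > domains.filtered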

sed command creates randomly named files

I recently wrote a script that does a sed command, to replace all the occurrences of "string1" with "string2" in a file named "test.txt".
It looks like this:
sed -i 's/string1/string2/g' test.txt
The catch is, "string1" does not necessarily exist in test.txt.
I notice that after executing a bunch of these sed commands, I get a number of empty files left behind in the directory, with names that look like this:
"sed4l4DpD"
Does anyone know why this might be, and how I can correct it?
-i takes a suffix that is used to name the backup copy of the file. Also, you need -e for the command.
Here's how you use it:
sed -i '2' -e 's/string1/string2/g' test.txt
This will create a file called test.txt2 that is the backup of test.txt
To replace the file (instead of creating a new copy - called an "in-place" substitution), change the -i value to '' (ie blank):
sed -i '' -e 's/string1/string2/g' test.txt
EDIT II
Here's actual command line output from a Mac (Snow Leopard) that shows that my modified answer (with the space between -i and the suffix removed) is correct.
NOTE: On a Linux server, there must be no space between -i and the suffix.
> echo "this is a test" > test.txt
> cat test.txt
this is a test
> sed -i '2' -e 's/a/a good/' test.txt
> ls test*
test.txt test.txt2
> cat test.txt
this is a good test
> cat test.txt2
this is a test
> sed -i '' -e 's/a/a really/' test.txt
> ls test*
test.txt test.txt2
> cat test.txt
this is a really good test
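For what it's worth, the suffix form with no space, e.g. -i.bak, is accepted by both GNU and BSD sed, which sidesteps the portability issue (the .bak suffix here is just an example):
sed -i.bak 's/string1/string2/g' test.txt    # edits test.txt, keeps the original as test.txt.bak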
I wasn't able to reproduce this with a quick test (using GNU sed 4.2.1) -- but strace did show sed creating a file called sedJd9Cuy and then renaming it to tmp (the file named on the command line).
It looks like something is going wrong after sed creates the temporary file and before it's able to rename it.
My best guess is that you've run out of room in the filesystem; you're able to create a new empty file, but unable to write to it.
What does df . say?
EDIT:
I still don't know what's causing the problem, but it shouldn't be too difficult to work around it.
Rather than
sed -i 's/string1/string2/g' test.txt
try something like this:
sed 's/string1/string2/g' test.txt > test.txt.$$ && mv -f test.txt.$$ test.txt
Something is going wrong with the way sed creates and then renames a text file to replace your original file. The above command uses sed as a simple input-output filter and creates and renames the temporary file separately.
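Wrapped up so that a failed sed doesn't leave a stray temporary file behind, that might look something like this (a sketch; the file name and strings are illustrative):
tmp=$(mktemp) || exit 1
if sed 's/string1/string2/g' test.txt > "$tmp"; then
    mv -f "$tmp" test.txt
else
    rm -f "$tmp"    # clean up instead of leaving an empty file behind
fi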
So after much testing last night, it turns out that sed was creating these files when trying to operate on an empty string. The way I was getting the array of "$string1" arguments was through a grep command, which seems to have been malformed. What I wanted from the grep was all lines containing something of the type "Text here '.'".
For example, the string "Text here 'ABC.DEF'" in a file should have been caught by grep, and then the ABC.DEF portion of the string would be substituted with ABC_DEF. Unfortunately, the grep I was using would also catch lines of the type "Text here ''" (that is, with nothing between the quotes). When, later on, the script attempted to perform a sed replacement using this empty string, the random file was created (probably because sed died).
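One simple guard against that, as a sketch (the variable names are illustrative), is to skip the sed call whenever the extracted string turns out to be empty:
if [ -n "$string1" ]; then
    sed -i "s/$string1/$string2/g" test.txt
fi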
Thanks for all your help in understanding how sed works.
It's better if you do it this way:
cat large_file | sed 's/string1/string2/g' > file_filtered

Extracting all lines from a file that are not commented out in a shell script

I'm trying to extract lines from certain files that do not begin with # (commented out). How would I run through a file, ignore everything with a # in front of it, and copy each line that does not start with a # into a different file?
Thanks
Simpler: grep -v '^[[:space:]]*#' input.txt > output.txt
This assumes that you're using a Unix/Linux shell with the standard Unix toolkit of commands AND that you want to keep a copy of the original file.
cp file file.orig
mv file file.fix
sed '/^[ ]*#/d' file.fix > file
rm file.fix
Or, if you've got a nice shiny new GNU sed, that can all be summarized as
cp file file.orig
sed -i '/^[ ]*#/d' file
In both cases, the bracket expression in the sed command is meant to contain a space character and a tab character.
So it's saying: delete any line that begins with optional space or tab characters followed by a #, but print everything else.
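If typing a literal tab inside the brackets is awkward, the POSIX [[:blank:]] class (space and tab) expresses the same thing; a sketch, assuming GNU sed for -i:
sed -i '/^[[:blank:]]*#/d' file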
I hope this helps.
grep -v ^\# file > newfile
grep -v ^\# file | grep -v ^$ > newfile
Not fancy regex, but I provide this method to Jr. Admins as it helps with understanding of pipes and redirection.
