Extract range of lines using sed - bash

I have defined two variables as follows:
var1=$(unzip -c ./*.zip | grep -n "Channel8"| cut -f1 -d":")
var2=$(unzip -c ./*.zip | grep -n "Channel10"| cut -f1 -d":")
I have a very big file and I would like to extract the range of lines between $var1 and $var2 using sed. I am trying the following
sed -n '/"$var1","$var"2p' $(unzip -c ./*.zip)
But with no success. Could you explain why this fails and how to fix it? Thanks.

You can use:
unzip -c ./*.zip | sed -n "$var1,$var2 p"
Fixes are:
Using double quotes instead of single quotes, so the shell variables expand
Removing the stray leading / from the sed expression
Piping the unzip output into sed instead of using command substitution (see the quick check below)
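A quick check of the quoting and piping fixes, using seq as a stand-in for the unzip output and 3 and 7 as made-up line numbers:
var1=3
var2=7
seq 10 | sed -n '$var1,${var2}p'    # wrong: single quotes keep the shell from expanding the variables, sed sees the literal text
seq 10 | sed -n "${var1},${var2}p"  # double quotes: prints lines 3 through 7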

Variables aren't expanded inside single quotes. Also, you need to pipe the output of unzip to sed, not use it as command-line arguments.
unzip -c ./*.zip | sed -n "${var1},${var2}p"
But it seems like you're doing this the hard way, reading the zip file 3 times. Just use the pattern you want to match as the range:
unzip -c ./*.zip | sed -n '/^extracting:.*Channel8/,/^extracting:.*Channel10/p'

Use double quotes so the vars expand, and feed sed from a pipe rather than command substitution:
unzip -c ./*.zip | sed -n "${var1},${var2}p"

Related

User input into variables and grep a file for pattern

Hi!
So I am trying to run a script which looks for a string pattern.
For example, from a file I want to find 2 words, located separately
"I like toast, toast is amazing. Bread is just toast before it was toasted."
I want to invoke it from the command line using something like this:
./myscript.sh myfile.txt "toast bread"
My code so far:
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat $text_file | grep -w "$keyword_first""$keyword_second" )
echo $find_keyword
I have tried a few different ways. Directly from the command line I can make it run using:
cat myfile.txt | grep -E 'toast|bread'
I'm trying to put the user input into variables and use the variables to grep the file.
You seem to be looking simply for
grep -E "$2|$3" "$1"
What works on the command line will also work in a script, though you will need to switch to double quotes for the shell to replace variables inside the quotes.
In this case, the -E option can be replaced with multiple -e options, too.
grep -e "$2" -e "$3" "$1"
You can pipe to grep twice:
find_keyword=$(cat "$text_file" | grep -w "$keyword_first" | grep -w "$keyword_second")
Note that your search word "bread" is not found because the string contains the uppercase "Bread". If you want to find the words regardless of this, you should use the case-insensitive option -i for grep:
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
In a full script:
#!/bin/bash
#
# usage: ./myscript.sh myfile.txt "toast" "bread"
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
echo "$find_keyword"
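Assuming myfile.txt holds the example sentence from the question on a single line, a run would look something like:
./myscript.sh myfile.txt toast bread
I like toast, toast is amazing. Bread is just toast before it was toasted.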

Pass a list of files to sed to delete a line in them all

I am trying to do a one-liner command that would delete the first line from a bunch of files. The list of files will be generated by a grep command.
grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | tr -s "\n" " " | xargs /usr/bin/sed -i '1d'
The problem is that sed can't see the list of files to act on. I'm not able to work out what is wrong with the command. Could someone please point me to my mistake?
Line numbers in sed are counted across all input files. So the address 1 only matches once per sed invocation.
In your example, only the first file in the list will get edited.
You can complete your task with a loop such as this:
grep -l 'hsv,vcv,tro,ztk' "${OUTPUT_DIR}/"*.csv |
while IFS= read -r file; do
sed -i '1d' "$file"
done
This might work for you (GNU sed and grep):
grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | xargs sed -i '1d'
The -l outputs the file names, which are received as arguments by xargs.
The -i edits in place the file and removes the first line of each file.
N.B. The -i option in sed works at a per-file level; to use line numbers for each file within a stream, use the -s option.
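A quick way to see the difference, assuming two small throwaway files a.csv and b.csv:
sed -n '1p' a.csv b.csv     # one stream: prints only the first line of a.csv
sed -s -n '1p' a.csv b.csv  # separate streams (GNU sed): prints the first line of each file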
The only solution that worked for me, apart from the one posted by Dan above, is this:
for k in $(grep -l 'hsv,vcv,tro,ztk' ${OUTPUT_DIR}/*.csv | tr -s "\n" " ")
do
/usr/bin/sed -i '1d' "${k}"
done

How to grep all characters in file

I have a CSV file with these lines:
----------+79975532211,----------+79975532212
4995876655,4995876658
I try to grep these lines in a Bash script:
#!/bin/bash
config='/test/config.conf'
sourcecsv=/test/sourse.csv
cat $sourcecsv | while read line
do
Oldnumber=$(echo $line | cut -d',' -f1)
cat $config | grep "\\$Oldnumber" -B 8
done
But when the script greps the value 4995876655, I get an error:
grep: Invalid back reference
How can I grep all values in my file?
Instead of:
cat $config | grep "\\$Oldnumber" -B 8
You should do:
grep -B 8 -F -- "$Oldnumber" "$config"
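The likely reason the original fails: with Oldnumber=4995876655, the \\ inside double quotes becomes a single backslash, so grep sees the pattern \4995876655, and \4 is a back reference to a (non-existent) capture group in a basic regular expression:
grep "\\4995876655" /test/config.conf
grep: Invalid back reference
-F avoids this by treating the string as a literal, and -- keeps a value starting with - from being read as an option.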
If you really mean to grep for all strings between commas, you can do it all in one go.
tr ',' '\n' </test/sourse.csv |
grep -F -f - -B 8 /test/config.conf
If you need to obtain the matches in sequence (all matches for the first string followed by all matches for the second, etc.), then maybe loop over them with a proper while loop:
tr ',' '\n' </test/sourse.csv |
while read -r Oldnumber; do
grep -F -B 8 -e "$Oldnumber" /test/config.conf
done
Keeping the file names in variables does not seem to offer any advantage here.
If you mean to search for the strings preceded by a literal backslash, you can add it back; the -F option I added makes grep treat the whole string as a literal. If you need regex metacharacters and take out the -F option, the backslashes have to be doubled at each layer: inside double quotes a single backslash is written as two, and a literal backslash in a regular expression again needs two.
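For instance, to match a literal backslash right before the number without -F, the pattern needs four backslashes in the script; the shell reduces them to two, and the regex engine reads those two as one literal backslash:
grep -B 8 -e "\\\\$Oldnumber" "$config"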

How to compose custom command-line argument from file lines?

I know about the xargs utility, which allows me to convert lines into multiple arguments, like this:
echo -e "a\nb\nc\n" | xargs
Results in:
a b c
But I want to get:
a:b:c
The character : is just an example. I want to be able to insert any separator between the lines to get a single argument. How can I do it?
If you have a file with multiple lines that you want to turn into a single argument by replacing the newlines with a single character, the paste command is what you need:
$ echo -en "a\nb\nc\n" | paste -s -d ":"
a:b:c
Then, your command becomes:
your_command "$(paste -s -d ":" your_file)"
EDIT:
If you want to insert more than a single character as a separator, you could use sed before paste:
your_command "$(sed -e '2,$s/^/<you_separator>/' your_file | paste -s -d "")"
Or use a single more complicated sed:
your_command "$(sed -n -e '1h;2,$H;${x;s/\n/<you_separator>/gp}' your_file)"
Note that echo needs the -e flag for the \n escapes to be turned into newlines:
echo -e "a\nb\nc\n" | xargs
to get a b c.
Coming back to your need, you could do this:
echo "a b c" | awk 'OFS=":" {print $1, $2, $3}'
It will change the separator from a space to : or whatever you want it to be.
You can also use sed:
echo "a b c" | sed -e 's/ /:/g
that will output a:b:c.
After all this data processing, you can use xargs to run the command you want. Just pipe to xargs and do whatever you need.
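For example, to hand the joined string to a command as a single argument (your_command is just a placeholder):
echo "a b c" | sed -e 's/ /:/g' | xargs your_command    # runs: your_command a:b:c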
Hope it helps.
You can join the lines using xargs and then replace the spaces (' ') using sed.
echo -e "a\nb\nc"|xargs| sed -e 's/ /:/g'
will result in
a:b:c
Obviously you can use this output as an argument for another command using another xargs.
echo -e "a\nb\nc"|xargs| sed -e 's/ /:/g'|xargs

How to pass output of grep to sed?

I have a command like this:
cat error | grep -o [0-9]
which prints only numbers like 2, 30 and so on. Now I wish to pass these numbers to sed.
Something like:
cat error | grep -o [0-9] | sed -n '$OutPutFromGrep,$OutPutFromGrepp'
Is it possible to do so?
I'm new to shell scripting. Thanks in advance.
If the intention is to print the lines that grep returns, generating a sed script might be the way to go:
grep -E -o '[0-9]+' error | sed 's/$/p/' | sed -f - error
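To see what the generated script looks like, run only the first two stages. Assuming grep finds the numbers 2 and 30 mentioned in the question, the intermediate output is:
grep -E -o '[0-9]+' error | sed 's/$/p/'
2p
30p
The final sed -f - error then reads those lines as commands and prints lines 2 and 30 of error.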
You are probably looking for xargs, particularly the -I option:
themel@eristoteles:~$ xargs -I FOO echo once FOO, twice FOO
hi
once hi, twice hi
there
once there, twice there
Your example:
themel@eristoteles:~$ cat error
error in line 123
error in line 234
errors in line 345 and 346
themel@eristoteles:~$ grep -o '[0-9]*' < error | xargs -I OutPutFromGrep echo sed -n 'OutPutFromGrep,OutPutFromGrepp'
sed -n 123,123p
sed -n 234,234p
sed -n 345,345p
sed -n 346,346p
For real-world use, you'll probably want to pass sed an input file and remove the echo.
(Fixed your UUOC, by the way.)
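For example, removing the echo and pointing each generated command back at the error file (same sample file as above), something along these lines:
grep -o '[0-9]*' < error | xargs -I OutPutFromGrep sed -n 'OutPutFromGrep,OutPutFromGrepp' error
Note that each number still yields its own sed invocation with a one-line range.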
Yes, you can pass output from grep to sed.
Please note that in order to match whole numbers you need to use [0-9]*, not just [0-9], which would match only a single digit.
Also note you should use double quotes to get the variables expanded (in the sed argument), and it seems you have a typo in the second variable name.
Hope this helps.
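Putting those points together, a corrected version might look like this (start and end are illustrative names, and head/tail are just one way to pick the two numbers out of the grep output):
start=$(grep -o '[0-9]*' error | head -n 1)
end=$(grep -o '[0-9]*' error | tail -n 1)
sed -n "${start},${end}p" error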

Resources