I have some variables:
begin=10
end=20
How do I pass them to the sed command? This is what I tried:
sed -n '$begin,$endp' filename | grep word
and this is what it should be equivalent to:
sed -n '10,20p' filename | grep word
The reason this doesn't work is that single quotes in shell code prevent variable expansion. A good way is to use awk:
awk -v begin="$begin" -v end="$end" 'NR == begin, NR == end' filename
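If the trailing grep from the question is also wanted, it can be folded into the same awk call; a sketch, where word stands for the question's literal pattern:
awk -v begin="$begin" -v end="$end" 'NR >= begin && NR <= end && /word/' filename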
It is possible with sed if you use double quotes (in which shell variables are expanded):
sed -n "$begin,$end p" filename
However, this is subject to code injection vulnerabilities because sed cannot distinguish between code and data this way (unlike the awk code above). If a user manages to set, say, end="20 e rm -Rf /;", unpleasant things can happen.
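To see the problem in action, here is a harmless stand-in for that attack; echo replaces the destructive command, and this assumes GNU sed (the e command is a GNU extension):
begin=10
end='20 e echo pwned #'
sed -n "$begin,$end p" filename
The resulting sed script is 10,20 e echo pwned # p, so the injected echo runs once for every line in the range; the # comments the trailing p out of the shell command that sed hands to the shell.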
Related
I run the command
df -gP /data1 /data2 | grep -v File | awk '{print $1}' |
awk -F/dev/ '$0=$2' | tr '\n' '
on the AIX shell (ksh) and it prints the output below:
lv_data01 lv_data02 root#testhost:/
However, I would like the output to be printed this way. Could someone help?
lv_data01 lv_data02
Using grep … | awk … | awk … is not necessary; a single awk could do the whole job. So could sed, and it might even be easier. I'd be tempted to deal with the spacing by using:
x=$(df … | sed …); echo $x
The tr command, once corrected, replaces newlines with spaces, so the prompt follows without a newline before it. The ; echo suggestion adds the missing newline; the echo $x suggestion (note no double quotes) does too.
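A quick illustration of that unquoted-echo behaviour (printf stands in for the multi-line df pipeline):
x=$(printf 'lv_data01\nlv_data02\n')
echo "$x"    # quoted: newline preserved, prints two lines
echo $x      # unquoted: word splitting collapses the newline to a space
The second echo prints lv_data01 lv_data02 followed by echo's own trailing newline, which is exactly the wanted output.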
As for the sed command:
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
- -n: don't print anything by default.
- If the line doesn't match File (doing the work of grep -v):
  - remove the first whitespace character (blank or tab) and everything after it (doing the work of awk '{print $1}');
  - replace everything up to /dev/ with nothing and print (doing the work of awk -F/dev/ '$0=$2').
The command substitution and capture, followed by echo, deals with spaces and newlines.
So, my suggested solution is:
x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
You could add unset x after the echo if you are going to be using this directly in the shell and not in a shell script. If it'll be encapsulated in a shell script, you don't have to worry about it.
I'm blithely assuming the output from df -gP won't contain a path such as this, with two occurrences of /dev:
/who/knows/dev/lv_data01/dev/bin
If that's a real problem, you can fix the sed script, but I don't think it will be. It's one thing the second awk script in the question handles differently.
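If you do need to guard against it, a sketch of the fix: anchor the match so only a leading /dev/ is stripped, instead of letting the greedy .* run to the last /dev/ (this assumes the device column always starts with /dev/):
sed -n '/File/!{ s/[[:space:]].*//; s%^/dev/%%p; }'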
I have a bash command that produces a list of files whose names I need to alter, so I was thinking of using something like this:
mycommand | awk {mv $1 altered$1}
The problem is that the second $1 should first be altered by applying some sed regular expressions to it. How can I apply sed to the second parameter? I tried with $() and |, but it does not work.
I also tried
awk '{print $1 sed "s/[^A-Za-z0-9._-]/_/g" <<< $1}'
awk: cmd. line:1: Unexpected token
mv is not an awk command. You need shell. Try:
mycommand | while IFS= read -r f; do mv "$f" "${f//[^A-Za-z0-9._-]/_}"; done
This assumes that the file names are newline-separated. This is OK unless a file name contains a newline as part of its name. For better reliability, mycommand and the while loop should be modified to use NUL as the separator.
How it works:
while IFS= read -r f; do
This starts a loop that reads each line, in turn, into variable f.
IFS= tells the shell to keep leading and trailing whitespace on a line. If mycommand produces superfluous leading or trailing whitespace that you want stripped, remove the IFS=.
-r tells the shell to keep backslashes in the input just as they are.
mv "$f" "${f//[^A-Za-z0-9._-]/_}"
This renames the file.
done
This signals the end of the while loop.
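As mentioned above, for complete reliability with arbitrary file names, switch to NUL separators. A sketch, assuming mycommand can emit NUL-terminated names (as find -print0 does):
mycommand | while IFS= read -r -d '' f; do
    mv "$f" "${f//[^A-Za-z0-9._-]/_}"
done
Here read -r -d '' (a bash feature) reads up to each NUL byte instead of each newline.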
Is a command substitution acceptable to you? If yes, a simple way is:
f=$(mycommand | awk 'NR==1{print $1}'); mv "$f" "altered$f"
Note that this handles only a single file name; the while loop above is more general.
Use rename (the Perl-based rename found on Debian-based distros; note that util-linux ships a different rename with an incompatible syntax):
rename 's/^/altered/' $(mycommand)
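Note that the unquoted $(mycommand) relies on word splitting, so it breaks on names containing whitespace. A sketch that feeds rename one name at a time instead (assuming the Perl rename as above):
mycommand | while IFS= read -r f; do rename 's/^/altered/' "$f"; done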
I have a shell script that accepts a parameter that is comma delimited,
-s 1234,1244,1567
That is passed to a curl PUT json field. Json needs the values in a "1234","1244","1567" format.
Currently, I am passing the parameter with the quotes already in it:
-s "\"1234\",\"1244\",\"1567\"", which works, but the users are complaining that its too much typing and hard to do. So I'd like to just take a comma delimited list like I had at the top and programmatically stick the quotes in.
Basically, I want a parameter to be passed in as 1234,2345 and end up as a variable that is "1234","2345"
I've read that the easiest approach here is to use sed, but I'm really not familiar with it and all of my efforts have failed.
You can do this in BASH:
$> arg='1234,1244,1567'
$> echo "\"${arg//,/\",\"}\""
"1234","1244","1567"
awk to the rescue!
$ awk -F, -v OFS='","' -v q='"' '{$1=$1; print q $0 q}' <<< "1234,1244,1567"
"1234","1244","1567"
or shorter with sed
$ sed -r 's/[^,]+/"&"/g' <<< "1234,1244,1567"
"1234","1244","1567"
translating this back to awk
$ awk '{print gensub(/([^,]+)/,"\"\\1\"","g")}' <<< "1234,1244,1567"
"1234","1244","1567"
You can use this:
QV=$(echo 1234,2345,56788 | sed -e 's/^/"/' -e 's/$/"/' -e 's/,/","/g')
Result:
echo $QV
"1234","2345","56788"
Just add double quotes at the start and end, and replace commas with quote/comma/quote globally.
Easy to do with sed:
$ echo '1234,1244,1567' | sed 's/[0-9]*/"\0"/g'
"1234","1244","1567"
- [0-9]* matches zero or more consecutive digits; since * is greedy, it will try to match as many as possible.
- "\0" double-quotes the matched pattern; the entire match is by default available as \0.
- g is the global flag, to replace all such matches.
In case \0 isn't recognized in some sed versions, use the POSIX-standard & instead:
$ echo '1234,1244,1567' | sed 's/[0-9]*/"&"/g'
"1234","1244","1567"
Similar solution with perl
$ echo '1234,1244,1567' | perl -pe 's/\d+/"$&"/g'
"1234","1244","1567"
Note: Using * instead of + with perl will give
$ echo '1234,1244,1567' | perl -pe 's/\d*/"$&"/g'
"1234""","1244""","1567"""
""
because \d* also matches the empty string: once before each comma, once before the trailing newline, and once after it.
I think this difference between sed and perl is similar to this question: GNU sed, ^ and $ with | when first/last character matches
Using sed:
$ echo 1234,1244,1567 | sed 's/\([0-9]\+\)/\"\1\"/g'
"1234","1244","1567"
i.e. replace every string of digits with the same string quoted, using a backreference (\1).
I have a bash script which checks for a string pattern in a file and deletes the entire line in the same file, but somehow it's not deleting the line and not throwing any error. The same command deletes from the file when run at the command prompt.
#array has patterns
for k in "${patternarr[#]}
do
sed -i '/$k/d' file.txt
done
The sed version is > 4.
When this loop completes, I want all lines matching any pattern in the array to be deleted from file.txt.
When I run sed -i '/pattern/d' file.txt from the command prompt it works fine, but not inside the bash script.
Thanks in advance
Here:
sed -i '/$k/d' file.txt
The sed script is singly-quoted, which prevents shell variable expansion. It will (probably) work with
sed -i "/$k/d" file.txt
I say "probably" because what it will do depends on the contents of $k, which is just substituted into the sed code and interpreted as such. If $k contains slashes, it will break. If it comes from an untrustworthy source, you open yourself up to code injection (particularly with GNU sed, which can be made to execute shell commands).
Consider k='^/ s/^/rm -Rf \//e; #'.
It is generally a bad idea to substitute shell variables into sed code (or any other code). A better way would be with GNU awk:
awk -i inplace -v pattern="$k" '!($0 ~ pattern)' file.txt
Or just use grep -v and a temporary file, as sketched below.
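A sketch of that last option; note that grep exits non-zero when it selects nothing, so don't chain the mv on grep's success alone (add -F to treat $k as a literal string rather than a pattern):
tmp=$(mktemp)
grep -v -e "$k" file.txt > "$tmp"
mv "$tmp" file.txt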
First of all, you have an unclosed double quote around ${patternarr[@]} in your for statement.
Then your problem is that you use single quotes in the sed argument, so your shell does not expand $k within them:
% declare -a patternarr=(foo bar fu foobar)
% for k in ${patternarr[@]}; do echo sed -i '/$k/d' file.txt; done
sed -i /$k/d file.txt
sed -i /$k/d file.txt
sed -i /$k/d file.txt
sed -i /$k/d file.txt
If you replace them with double quotes, it works:
% for k in ${patternarr[@]}; do echo sed -i "/$k/d" file.txt; done
sed -i /foo/d file.txt
sed -i /bar/d file.txt
sed -i /fu/d file.txt
sed -i /foobar/d file.txt
Any time you write a loop in shell just to manipulate text you have the wrong approach. This is probably closer to what you really should be doing (no surrounding loop required):
awk -v ks="${patternarr[*]}" 'BEGIN{gsub(/ /,")|(",ks); ks="("ks")} $0 !~ ks' file.txt
but there may be even better approaches still (e.g. only checking 1 field instead of the whole line, or using word boundaries, or string comparison or....) if you show us some sample input and expected output.
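For instance, if the patterns are literal strings and only the first field needs checking, a hypothetical string-comparison variant with no regex at all (this assumes the patterns themselves contain no spaces):
awk -v ks="${patternarr[*]}" '
  BEGIN { n = split(ks, a, " "); for (i = 1; i <= n; i++) del[a[i]] }   # build a lookup set
  !($1 in del)                                                          # keep lines whose first field is not in the set
' file.txt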
You need to use double quotes to interpolate shell variables inside the sed command, like:
for k in "${patternarr[@]}"; do
sed -i "/$k/d" file.txt
done
I need to grep multiple strings, but I don't know the exact number of strings.
My code is:
s2=( $(echo $1 | awk -F"," '{ for (i=1; i<=NF ; i++) {print $i} }') )
for pattern in "${s2[@]}"; do
ssh -q host tail -f /some/path |
grep -w -i --line-buffered "$pattern" > some_file 2>/dev/null &
done
Now the code is not doing what it's supposed to do. For example, if I run ./script s1,s2,s3,s4,...
it prints all lines that contain any of s1, s2, s3, ...
The script is supposed to behave like grep "$s1" | grep "$s2" | grep "$s3" ..., printing only the lines that contain all of them.
grep doesn't have an option to match all of a set of patterns. So the best solution is to use another tool, such as awk (or your choice of scripting languages, but awk will work fine).
Note, however, that awk and grep have subtly different regular expression implementations. It's not clear from the question whether the target strings are literal strings or regular expression patterns, and if the latter, what the expectations are. However, since the argument comes delimited with commas, I'm assuming that the pieces are simple strings and should not be interpreted as patterns.
If you want the strings to be interpreted as patterns, you can change index to match in the following little program:
ssh -q host tail -f /some/path |
awk -v STRINGS="$1" -v IGNORECASE=1 \
'BEGIN{split(STRINGS,strings,/,/)}
{for(i in strings)if(!index($0,strings[i]))next}
{print;fflush()}'
Note:
IGNORECASE is only available in gnu awk; in (most) other implementations, it will do nothing. It seems that is what you want, based on the fact that you used -i in your grep invocation.
fflush() is also an extension, although it works with both gawk and mawk. In Posix awk, fflush requires an argument; if you were using Posix awk, you'd be better off printing to stderr.
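A portable sketch of the same idea without the gawk extensions, lowercasing both sides by hand and using the system("") flushing idiom (POSIX awk flushes pending output before running system()):
ssh -q host tail -f /some/path |
awk -v STRINGS="$1" '
  BEGIN { n = split(tolower(STRINGS), strings, ",") }
  {
    line = tolower($0)
    for (i = 1; i <= n; i++)
      if (!index(line, strings[i])) next
    print
    system("")
  }'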
You can use extended grep
egrep "$s1|$s2|$s3" fileName
If you don't know how many patterns you need to grep, but you have all of them in an array called s, you can use
egrep $(sed 's/ /|/g' <<< "${s[@]}") fileName
This creates a herestring with all elements of the array, sed replaces the field separator of bash (a space) with |, and if we feed that to egrep we grep all the strings that are in the array s.
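An alternative sketch that builds the same alternation without sed, joining the array with | by setting IFS in a subshell:
(IFS='|'; grep -E "${s[*]}" fileName)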
test.sh:
#!/bin/bash -x
a=" $#"
grep ${a// / -e } .bashrc
It works like this:
$ ./test.sh 1 2 3
+ a=' 1 2 3'
+ grep -e 1 -e 2 -e 3 .bashrc
(here follows lots of text matching any of the arguments)