Make a one-liner of a sed command - bash

After a lot of trying over the past day, I can't get the following command to work on one line:
sed '/'"$var1"'/ {n;n;a '\'"$var2"\'' \\
}' tempproject.cfg
When I run it like this (spread over two lines), it matches $var1 and replaces the 3rd line after it with $var2.
Example of what the sed command should do, with:
var1=c
var2=hello
Input file tempproject.cfg:
a
b
c
d
e
f
g
h
Expected result after running the sed command above:
a
b
c
d
e
'hello' \
g
h
When I put the command on one line, I get the following error:
sed: -e expression #1, char 0: unmatched `{'
Thanks in advance!

$ var1=c
$ var2=hello
$ sed "/$var1/{n;n;n;s/.*/'$var2' \\\ /}" tempproject.cfg
should give you
a
b
c
d
e
'hello' \
g
h
i
Why use three backslashes?
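A quick way to see why (this is just a sketch of the shell quoting, not part of the original answer): echo the same double-quoted expression and look at what sed actually receives.
var2=hello
echo "s/.*/'$var2' \\\ /"
# prints: s/.*/'hello' \\ /
# Inside double quotes, \\ collapses to a single \ and "\ " is left alone, so sed
# receives \\ in the replacement, which it in turn prints as one literal backslash.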

Related

How to iterate string with blank?

I have the following code:
line="95:p1=a b c 95:p2=d e 96:p1=a b c 96:p2=d e"
for l in $line; do
echo $l
done
I get this:
95:p1=a
b
c
95:p2=d
e
96:p1=a
b
c
96:p2=d
e
But in fact "a b c" is a single whole string in my case, so is there some way I could get the following instead?
95:p1=a b c
95:p2=d e
96:p1=a b c
96:p2=d e
1st solution: with your shown samples, please try the following awk code, written and tested with GNU awk.
echo "$line"
95:p1=a b c 95:p2=d e 96:p1=a b c 96:p2=d e
awk -v RS='[0-9]{2}:p[0-9]=[a-zA-Z] [a-zA-Z]( [a-zA-Z]|$)*' 'RT{print RT}' <<<"$line"
Output with shown samples will be as follows:
95:p1=a b c
95:p2=d e
96:p1=a b c
96:p2=d e
2nd solution: with any POSIX awk, please try the following code:
awk '
{
    while (match($0, /[0-9]{2}:p[0-9]=[a-zA-Z] [a-zA-Z]( [a-zA-Z]|$)*/)) {
        print substr($0, RSTART, RLENGTH)
        $0 = substr($0, RSTART+RLENGTH)
    }
}
' <<<"$line"
With bash
read -ra f <<<"$line"   # split the string into words
n=${#f[@]}
i=0
lines=()
while ((i < n)); do
    l=${f[i++]}
    until ((i == n)) || [[ ${f[i]} =~ ^[0-9]+: ]]; do
        l+=" ${f[i++]}"
    done
    lines+=( "$l" )
done
declare -p lines
outputs
declare -a lines=([0]="95:p1=a b c" [1]="95:p2=d e" [2]="96:p1=a b c" [3]="96:p2=d e")
Now you can do
for l in "${lines[@]}"; do
    do_something_with "$l"
done
Or sed, and capture the lines with bash builtin mapfile
mapfile -t lines < <(sed -E 's/ ([0-9]+:)/\n\1/g' <<< "$line")
You can't do this with regular parameters. If you want a collection of strings that can contain whitespace, use an array.
line=("95:p1=a b c" "95:p2=d e" "96:p1=a b c" "96:p2=d e")
for l in "${line[@]}"; do
    echo "$l"
done
Otherwise, you'll need some way of distinguishing between "literal" spaces and "delimiter" spaces. (Maybe the latter is followed by <num>:, but that logic is not trivial to implement using bash regular expressions. You would probably be better off using a more capable language instead of trying to do this in bash.)
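That said, a rough bash-only sketch of the "delimiter space is one followed by <num>:" idea is possible (my illustration, not taken from any of the answers here); it peels fields off the right-hand end, relying on the greedy .* to bind to the last such delimiter:
line="95:p1=a b c 95:p2=d e 96:p1=a b c 96:p2=d e"
items=()
rest=$line
# Match "<anything> <digits>:<rest>"; the greedy (.*) makes the split happen
# at the LAST space that is followed by <digits>:.
while [[ $rest =~ ^(.*)" "([0-9]+:.*)$ ]]; do
    items=("${BASH_REMATCH[2]}" "${items[@]}")   # prepend the trailing field
    rest=${BASH_REMATCH[1]}
done
items=("$rest" "${items[@]}")
declare -p items
# declare -a items=([0]="95:p1=a b c" [1]="95:p2=d e" [2]="96:p1=a b c" [3]="96:p2=d e")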
echo "${line}" |
mawk 'BEGIN { FS=RS="^$"(ORS="") } gsub(" [^ :]+:","\1&") + gsub("\1.","\n")^_'
95:p1=a b c
95:p2=d e
96:p1=a b c
96:p2=d e
If your grep supports the -P (PCRE) option, please try:
grep -Po "\d+:.*?(?=(?:\s*\d+:|$))" <<< "$line"
Output:
95:p1=a b c
95:p2=d e
96:p1=a b c
96:p2=d e
Explanation of the regex \d+:.*?(?=(?:\s*\d+:|$)):
\d+: matches digits followed by a colon; it will match 95: or 96:.
.*?(?=pattern) matches the shortest sequence of characters followed
by the pattern. (?=pattern) is a lookahead assertion, which is not
included in the matched result.
The pattern here is (?:\s*\d+:|$), an alternation of digits followed
by a colon, or the end of the string. The former matches the starting
portion of the next item. The \s* before \d+ matches zero or more
whitespace characters, which trims the trailing whitespace from the
matched result.
If you want to iterate over the divided substrings, you can say:
while IFS= read -r i; do
echo "$i" # or whatever you want to do with "$i"
done < <(grep -Po "\d+:.*?(?=(?:\s*\d+:|$))" <<< "$line")

How to add an empty line if grep reports multiple instances of the same pattern?

Ubuntu 16.04
Bash 4.4.0
I am using grep to search for the word 'error' in a json file which is a logfile. How can an empty line be added after each instance?
My command: grep error "${wDir}"/"${client}"/logs/server.json >> "$eLog"
The output:
{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponse":{"error":{"message":"(#200) User does not ....."}}}
{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponseraw":{"error":{"message":"(#200) User does not ..."}}}
{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponse":{"error":{"message":"(#200) User does not ....."}}}
{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponseraw":{"error":{"message":"(#200) User does not ..."}}}
The desired output:
{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponse":{"error":{"message":"(#200) User does not ....."}}}

{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponseraw":{"error":{"message":"(#200) User does not ..."}}}

{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponse":{"error":{"message":"(#200) User does not ....."}}}

{"name":"XXX_XXX","hostname":"xxx.xx.xxx","pid":5193,"level":30,"fbresponseraw":{"error":{"message":"(#200) User does not ..."}}}
You may use awk to search for the pattern and insert an empty line:
awk '/error/ { print $0 ORS }' "${wDir}"/"${client}"/logs/server.json
By default, ORS (the output record separator) is \n, so print $0 ORS prints each matching line followed by an extra newline.
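To collect the matches into the error log as in the question, the same command can simply be redirected there:
awk '/error/ { print $0 ORS }' "${wDir}"/"${client}"/logs/server.json >> "$eLog"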
Simple is good.
sed '/error/G' "${wDir}"/"${client}"/logs/server.json >> "$eLog"
or if you want it to be case-insensitive
sed '/error/IG' "${wDir}"/"${client}"/logs/server.json >> "$eLog"
examples:
$: cat x
a
error
b
c
foo error other stuff
d
e
foo other stuff ERROR ERROR
f
g
$: sed '/error/G' x
a
error

b
c
foo error other stuff

d
e
foo other stuff ERROR ERROR
f
g
$: sed '/error/IG' x
a
error

b
c
foo error other stuff

d
e
foo other stuff ERROR ERROR

f
g
With grep:
grep "aa" a.txt | xargs printf "%s\n\n"
output:
aa

aa

aa

aa
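One caveat (my note, not from the original answer): xargs does its own quote and backslash processing and splits its input on whitespace, so it would mangle lines like the JSON entries in the question. A plain read loop avoids that:
grep error "${wDir}"/"${client}"/logs/server.json |
while IFS= read -r l; do
    printf '%s\n\n' "$l"
done >> "$eLog"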

How to line wrap output in bash?

I have a command which outputs in this format:
A
B
C
D
E
F
G
I
J
etc
I want the output to be in this format
A B C D E F G I J
I tried using ./script | tr "\n" " " but all it does is remove n from the output
How do I get all the output on one line (line wrapped)?
Edit: I accidentally put grep in while asking the question; I have removed it. My original question still stands.
The grep is superfluous.
This should work:
./script | tr '\n' ' '
It did for me with a command al that lists its arguments one per line:
$ al A B C D E F G H I J
A
B
C
D
E
F
G
H
I
J
$ al A B C D E F G H I J | tr '\n' ' '
A B C D E F G H I J $
As Jonathan Leffler points out, you don't want the grep. The command you're using:
./script | grep tr "\n" " "
doesn't even invoke the tr command; it would search for the pattern "tr" in files named "\n" and " ". Since that's not the output you reported, I suspect you've mistyped the command you're using.
You can do this:
./script | tr '\n' ' '
but (a) it joins all its input into a single line, and (b) it doesn't append a newline to the end of the line. Typically that means your shell prompt will be printed at the end of the line of output.
If you want everything on one line, you can do this:
./script | tr '\n' ' ' ; echo ''
Or, if you want the output wrapped to a reasonable width:
./script | fmt
The fmt command has a number of options to control things like the maximum line length; read its documentation (man fmt or info fmt) for details.
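For example, to wrap at roughly 40 columns instead of the GNU fmt default of 75:
./script | fmt -w 40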
No need to use other programs; why not use Bash to do the job?
line=$(./script.sh)
set -- $line
echo "$*"
The set -- replaces the positional parameters, and one of the default separators (the characters in IFS) is "\n". Note that this will overwrite any existing positional parameters, so good coding practice suggests reassigning them to named variables early in the script.
When we use "$*" (note the quotes) it joins them all together again using the first character of IFS as the glue. By default that is a space.
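A small demonstration of that (my example, not in the original answer): change the first character of IFS and "$*" joins with that character instead.
out=$(printf '%s\n' A B C D)   # stand-in for ./script.sh: one item per line
set -- $out                    # split on the default IFS (space, tab, newline)
IFS=,
echo "$*"                      # A,B,C,D
unset IFS                      # restore default word splitting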
tr is an unnecessary child process.
By the way, there is a command called script, so be careful of using that name.
If I'm not mistaken, the newline characters are removed (by the shell's word splitting) when the argument to echo is given unquoted:
tmp=$(./script.sh)
echo $tmp
results in
A B C D E F G H I J
whereas
tmp=$(./script.sh)
echo "$tmp"
results in
A
B
C
D
E
F
G
H
I
J
If needed, you can re-assign the output of the echo command to another variable:
tmp=$(./script.sh)
tmp2=$(echo $tmp)
The $tmp2 variable will then contain no newlines.

Filter input to remove certain characters/strings

I have a quick question about text parsing. For example:
INPUT="a b c d e f g"
PATTERN="a e g"
The INPUT variable should be modified so that the PATTERN characters are removed; in this example:
OUTPUT="b c d f"
I've tried to use tr -d $x in a for loop over PATTERN, but I don't know how to pass the output on to the next loop iteration.
Edit: what if the INPUT and PATTERN variables contain strings instead of single characters?
Where does $x come from? Anyway, you were close:
tr -d "$PATTERN" <<< $INPUT
To assign the result to a variable, just use
OUTPUT=$(tr -d "$PATTERN" <<< $INPUT)
Just note that spaces will be removed, too, because they are part of the $PATTERN.
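To illustrate with the values from the question (the second line is a possible workaround of mine, not from the original answer):
INPUT="a b c d e f g"
PATTERN="a e g"
tr -d "$PATTERN" <<< "$INPUT"           # bcdf        (spaces deleted too)
tr -d "${PATTERN// /}" <<< "$INPUT"     #  b c d  f   (letters only; stray spaces remain)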
Pure Bash using parameter substitution:
INPUT="a b c d e f g"
PATTERN="a e g"
for p in $PATTERN; do
    INPUT=${INPUT/ $p/}
    INPUT=${INPUT/$p /}
done
echo "'$INPUT'"
Result:
'b c d f'
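Regarding the edit about strings rather than single characters: the same loop works as long as $PATTERN is a space-separated list of words to strip. A quick sketch with made-up values:
INPUT="foo bar baz qux"
PATTERN="bar qux"
for p in $PATTERN; do
    INPUT=${INPUT/ $p/}
    INPUT=${INPUT/$p /}
done
echo "'$INPUT'"
# 'foo baz'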

Reading a subset of the lines in a text file, with bash

I have a file
line a - this is line a
line b - this is line b
line c - this is line c
line d - this is line d
line e - this is line e
The question is: How can I output the lines starting from "line b" till "line d" using bash commands?
I mean, to obtain:
"line b - this is line b
line c - this is line c
line d - this is line d"
sed -n '/line b/,/line d/p' file
Your example is not enough to infer what you want in the general case, but assuming you want to remove the first and last line, you can simply use
tail -n +2 "$filename" | head -n -1
Here tail -n +2 prints all the lines starting from the second, and head -n -1 prints all the lines except the last (the negative count requires GNU head).
for your set of sample data:
awk '/line b/,/line d/' file
Or
awk '/line d/{f=0;print}/line b/{f=1}f' file
If by bash, you mean actually bash alone, I can't help you. You really should be using the right tools for the job. If you mean standard UNIX utilities that you can call from bash, I would be using awk for that.
echo 'line a - this is line a
line b - this is line b
line c - this is line c
line d - this is line d
line e - this is line e' | awk '
BEGIN {e=0}
/^line b/ {e=1}
/^line d/ {if (e==1) {print;exit}}
{if (e==1) print}
'
This outputs:
line b - this is line b
line c - this is line c
line d - this is line d
The way it works is simple.
e is the echo flag, initially set to false (0).
when you find line b, set echo to true (1) - don't print yet. That will be handled by the last bullet point below.
when you find line d and echo is on, print it and exit.
when echo is on, print the line (this includes line b).
I've made an assumption here that you don't want to exit on a line d unless you're already echoing. If that's wrong, move the exit outside of the if statement for line d:
/^line d/ {if (e==1) print;exit}
Then, if you get a line d before your line b, it will just exit without echoing anything.
The "/^line X/"-type clauses can be made very powerful to match pretty well anything you can throw at it.
You can do it using bash alone, though I agree with Pax that using other tools is probably a better solution. Here's a bash-only solution:
while read -r line
do
    t=${line#line b}
    if test "$t" != "$line"
    then
        echo "$line"
        while read -r line
        do
            echo "$line"
            t=${line#line d}
            if test "$t" != "$line"
            then
                exit 0
            fi
        done
    fi
done < file
Another approach, depending on exactly what you mean:
pcregrep -M 'line b - this is line b
line c - this is line c
line d - this is line d' file
