How to use :put to append a variable at the end of the current line with vim - bash

I have the following command, which I need to insert in a bash script:
vim file.txt -c ':let var=$foo' -c ':execute "normal! gg/string\<cr>V"' -c ":normal! d" -c ':execute "normal inewstring\<Esc>"' -c ':put =var'
What it does (or what I want it to do) is use the variable foo, which is defined in the script, search for the first appearance of string, select the whole line and delete it, then insert newstring and append the value of foo right after this new string. However, my code always puts the value on the next line, no matter how I change the x value in :[x]put.
As a vim novice I'm not even sure this is an efficient way to achieve my goal, so any suggestion is welcome. Thanks in advance.
Let's say that we have this input file:
$ cat file.txt
blah
string foo
string foo
blah
What I'm expecting to obtain (with foo="hello") is:
$ cat file.txt
blah
newstringhello
string foo
blah

I am a big vim fan, but if I were you, I wouldn't do it with vim.
Since you didn't post the example input and the desired output, I can only guess what you want from your description.
Given that we have:
kent$ cat f
blah
string foo bar
string foo bar
blah
With var="hello", the following sed one-liner transforms the input into:
kent$ sed "0,/string/{/string/s/.*/newString/};$ a \\$var" f
blah
newString
string foo bar
blah
hello
However, I don't know if it is exactly what you wanted.
Update
kent$ sed "0,/string/{/string/s/.*/newString$var/}" f
blah
newStringhello
string foo bar
blah
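That said, if you do want to stay inside vim: :put is linewise, so it always creates a new line; to append to the current line you would normally build the text with a normal-mode command instead. An untested sketch in the spirit of the original command (it assumes foo is exported, so vim can read it as the environment variable $foo):
# cc replaces the matched line; the concatenation appends var on the same line
vim file.txt \
  -c 'let var=$foo' \
  -c 'execute "normal! gg/string\<CR>"' \
  -c 'execute "normal! ccnewstring" . var' \
  -c 'wq'
This should produce the newstringhello line from the expected output above.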

Related

Replace text in file if previous line matches another text

My file looks like this:
FooBarA
foo bar
foo = bar
FooBarB
foo bar
foo = bar
FooBarC
foo bar
foo = bar
...
What I would like to do is to write a script that replaces the bar in foo = bar but only if it belongs to FooBarB. So in the example above only the second bar out of all foo = bar lines should be replaced.
I've played around with sed but I just can't get it done right. I would also like to avoid installing any tools that aren't necessarily pre-installed on the system (I'm on Mac OS), since the script will be used by other team members too.
One way to do it with sed (tested with both macOS's sed and GNU sed) would be this:
replace.sed
#!/usr/bin/env sed -Ef
/FooBarB/,/^FooBar/ {
s/(foo[[:space:]]*=[[:space:]]*).+/\1new-value/
}
Here's what it does:
/FooBarB/,/^FooBar/ matches a range of lines where the first line matches the regex /FooBarB/ and the last line matches the regex /^FooBar/ (which is the start of the next "group"). The comma between the two regexes is the syntax for range matching in sed.
s/(foo[[:space:]]*=[[:space:]]*).+/\1new-value/ — [s]ubstitutes (in the matched range of lines) whatever matches the regex (foo[[:space:]]*=[[:space:]]*).+ with \1new-value, where \1 references the first capturing group in the search regex. The search regex looks for foo followed by optional whitespace, an = sign, optional whitespace again, and then whatever else is there, which in your case is the old value.
You could do it all in just one line, but I wanted to show a version that's a bit more digestible (as far as sed goes, in any case):
sed -E '/FooBarB/,/^FooBar/s/(foo[[:space:]]*=[[:space:]]*).+/\1new-value/' temp.md
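If you want to edit the file in place, note that -i differs between the two seds: GNU sed takes an optional suffix attached to -i, while the BSD sed shipped with macOS requires a separate (possibly empty) suffix argument. A sketch of both forms:
# GNU sed
sed -E -i '/FooBarB/,/^FooBar/s/(foo[[:space:]]*=[[:space:]]*).+/\1new-value/' temp.md
# macOS / BSD sed (note the empty suffix argument after -i)
sed -E -i '' '/FooBarB/,/^FooBar/s/(foo[[:space:]]*=[[:space:]]*).+/\1new-value/' temp.md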
This might work for you (GNU sed):
sed '/FooBarB/{:a;n;/^$/b;/foo = bar/!ba;s//foo = baz/}' file
Match on the string FooBarB and start a loop.
Fetch the next line and study it.
If the line is empty the stanza is done, so break out of the loop.
If the line does not contain the string foo = bar, fetch the next line and continue the loop.
Otherwise, substitute the new value for bar and finish the loop.
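Run against the sample input in the question, this should change only the value inside the FooBarB block (a quick check, assuming GNU sed):
FooBarA
foo bar
foo = bar
FooBarB
foo bar
foo = baz
FooBarC
foo bar
foo = bar
...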
An alternative (which may work for macOS users):
sed -e '/FooBarB/{:a' -e 'n;/^$/b;/foo = bar/!ba;s//foo = baz/;}' file
Since the OP changed the input data in the question, here is another solution:
sed '/FooBar/h;G;/FooBarB/s/foo = bar/foo = baz/;P;d' file
Using any awk in any shell on every Unix box:
$ awk -v tgt='FooBarB' -v val='whatever' '
NF==1{tag=$0} (NF>1) && (tag==tgt) && sub(/=.*/,"= "){$0=$0 val}
1' file
FooBarA
foo bar
foo = bar
FooBarB
foo bar
foo = whatever
FooBarC
foo bar
foo = bar
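For readability, the same one-liner spread out with comments (the same logic, only reformatted):
awk -v tgt='FooBarB' -v val='whatever' '
    NF == 1 { tag = $0 }    # a one-field line is a group header; remember it
    (NF > 1) && (tag == tgt) && sub(/=.*/, "= ") {
        $0 = $0 val         # inside the wanted group: keep "foo = " and append the new value
    }
    1                       # print every line, changed or not
' file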
For reference, the GNU awk variant:
awk -v v="newvalue" 'BEGIN{FS=OFS="\n";RS=ORS="\n\n"}$1=="FooBarB"{$3="foo = " v}1' file
The -v option passes the wanted string in the variable v.
The BEGIN block sets the input and output field separators to a newline and the input and output record separators to two newlines (i.e. a blank line).
That way each record is the whole block of lines belonging to one FooBar[ABC] group.
The new value is set by rewriting the third line of the matching record, and the trailing 1 prints every record.
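Note that this variant relies on the blocks being separated by blank lines (that is what RS="\n\n" expects), i.e. it assumes input shaped like:
FooBarA
foo bar
foo = bar

FooBarB
foo bar
foo = bar

FooBarC
foo bar
foo = bar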

Multiple "sed" actions on previous results

I have this input:
bar foo
foo ABC/DEF
BAR ABC
ABC foo DEF
foo bar
On the above I need to do 4 (sequential) actions:
select only lines containing "foo" (lowercase)
on the selected lines, remove everything but UPPERCASE letters
delete empty lines (if any are created by the previous action)
and on the lines remaining from the above, enclose every char in [x]
I'm able to solve the above, but need two sed invocations piped together. Script:
#!/bin/bash
data() {
cat <<EOF
bar foo
foo ABC/DEF
BAR ABC
ABC foo DEF
foo bar
EOF
}
echo "Result OK"
data | sed -n '/foo/s/[^A-Z]//gp' | sed '/^\s*$/d;s/./[&]/g'
# in the above it is solved using 2 sed invocations
# trying to solve it using only one invocation,
# but the following doesn't do what i need.. :( :(
echo "Variant 2 - trying to use only ONE invocation of sed"
data | sed -n '/foo/s/[^A-Z]//g;/^\s*$/d;s/./[&]/gp'
output from the above:
Result OK
[A][B][C][D][E][F]
[A][B][C][D][E][F]
Variant 2 - trying to use only ONE invocation of sed
[A][B][C][D][E][F]
[B][A][R][ ][A][B][C]
[A][B][C][D][E][F]
Variant 2 should also output only
[A][B][C][D][E][F]
[A][B][C][D][E][F]
Is it possible to solve the above using only one sed invocation?
sed -n '/foo/{s/[^A-Z]//g;/^$/d;s/./[&]/g;p;}' inputfile
Output:
[A][B][C][D][E][F]
[A][B][C][D][E][F]
Alternative sed approach:
sed '/foo/!d;s/[^A-Z]//g;/./!d;s/./[&]/g' file
The output:
[A][B][C][D][E][F]
[A][B][C][D][E][F]
/foo/!d - deletes all lines that don't contain foo
/./!d - deletes all empty lines
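For comparison only (not taken from the answers above), the same select/strip/wrap pipeline also fits in a single awk call; a rough equivalent:
$ awk '/foo/ { gsub(/[^A-Z]/, ""); if ($0 != "") { gsub(/./, "[&]"); print } }' inputfile
[A][B][C][D][E][F]
[A][B][C][D][E][F]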

Display Unique Shell Columns

Given we have two formatted strings that are unrelated to each other.
#test.rb
string_1 = "Title\nfoo bar\nbaz\nfoo bar baz boo"
string_2 = "Unrelated Title\ndog cat farm\nspace moon"
How can I use ruby or shell commands to display each of these strings as a column in the terminal? The key is that the lines of each string do not form correlated rows, i.e. this is not a table, but rather 2 lists side by side.
Title            Unrelated Title
foo bar          dog cat farm
baz              space moon
foo bar baz boo
You can try using the paste and column commands together. Note that these are shell assignments, so there must be no spaces around the = operator.
$ string_1="Title\nfoo bar\nbaz\nfoo bar baz boo"
$ string_2="Unrelated Title\ndog cat farm\nspace moon"
$ paste -d '|' <(echo -e "$string_1") <(echo -e "$string_2") | column -s'|' -t
Title            Unrelated Title
foo bar          dog cat farm
baz              space moon
foo bar baz boo
We paste the lines with | as the delimiter and tell the column command to use | as the separator when forming columns.
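If the data could itself contain a | character, the tab that paste uses by default is a safer delimiter; a variant of the same idea (a sketch, using printf '%b' in place of echo -e):
$ paste <(printf '%b\n' "$string_1") <(printf '%b\n' "$string_2") | column -t -s $'\t'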
In Ruby, you could do it this way:
#!/usr/bin/env ruby
string_1 = "Title\nfoo bar\nbaz\nfoo bar baz boo"
string_2 = "Unrelated Title\ndog cat farm\nspace moon"
a1 = string_1.split("\n")
a2 = string_2.split("\n")
a1.zip(a2).each { |pair| puts "%-20s%s" % [pair.first, pair.last] }
# or
# a1.zip(a2).each { |left, right| puts "%-20s%s" % [left, right] }
This produces:
Title               Unrelated Title
foo bar             dog cat farm
baz                 space moon
foo bar baz boo
If you use temp files:
string_1="Title\nfoo bar\nbaz\nfoo bar baz boo"
string_2="Unrelated Title\ndog cat farm\nspace moon"
echo -e "$string_1" >a.txt
echo -e "$string_2" >b.txt
paste a.txt b.txt
I hope it will help.

Split text file based on date tag / timestamp

I have a big log file containing date tags. It looks like this:
[01/11/2015, 02:19]
foo
[01/11/2015, 08:40]
bar
[04/11/2015, 12:21]
foo
bar
[08/11/2015, 14:12]
bar
foo
[09/11/2015, 11:25]
...
[15/11/2015, 19:22]
...
[15/11/2015, 21:55]
...
and so on. I need to split this data into per-day files, like:
01.txt:
[01/11/2015, 02:19]
foo
[01/11/2015, 08:40]
bar
04.txt:
[04/11/2015, 12:21]
foo
bar
etc. How can I do that using any of the standard unix tools?
I don't think there's a tool that will do it without a little programming, but with Awk the little programming really isn't all that hard.
script.awk
/^\[[0-3][0-9]\/[01][0-9]\/[12][0-9]{3},/ {
if ($1 != old_date)
{
if (outfile != "") close(outfile);
outfile = sprintf("%.2d.txt", ++filenum);
old_date = $1
}
}
{ print > outfile }
The first (bigger) block of code recognizes the date string, which is also in $1 (so the condition could be made more precise by referring to $1, but the benefit is minimal to non-existent). Inside the action, it checks whether the date differs from the last date it remembered. If so, it checks whether it has a file open and closes it if necessary (close is part of POSIX awk). Then it generates a new file name and remembers the date it is now processing.
The second smaller block simply writes the current line to the current file.
Invocation
awk -f script.awk data
This assumes you have a file script.awk; you could provide the program on the command line if you prefer. If the whole thing is encapsulated in a shell script, I'd embed the program rather than use a second file, but I find a separate file convenient for development. (The shell script would contain awk '…the script…' "$@" with no separate file.)
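For instance, such a wrapper might look like this (the same program as script.awk above, just inlined; split.sh is a hypothetical name):
#!/bin/sh
# split.sh -- inline version of script.awk; files to split are passed as arguments
awk '
/^\[[0-3][0-9]\/[01][0-9]\/[12][0-9]{3},/ {
    if ($1 != old_date) {
        if (outfile != "") close(outfile)
        outfile = sprintf("%.2d.txt", ++filenum)
        old_date = $1
    }
}
{ print > outfile }
' "$@"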
Example output files
Given the sample data from the question, the output is in five files, 01.txt .. 05.txt.
$ for file in 0?.txt; do boxecho $file; cat $file; done
************
** 01.txt **
************
[01/11/2015, 02:19]
foo
[01/11/2015, 08:40]
bar
************
** 02.txt **
************
[04/11/2015, 12:21]
foo
bar
************
** 03.txt **
************
[08/11/2015, 14:12]
bar
foo
************
** 04.txt **
************
[09/11/2015, 11:25]
...
************
** 05.txt **
************
[15/11/2015, 19:22]
...
[15/11/2015, 21:55]
...
$
The boxecho command is a simple script that echoes its arguments in a box of stars:
echo "** $* **" | sed -e h -e s/./*/g -e p -e x -e p -e x
Revised file name format
I wish to have the output as [day].txt or [day].[month].[year].txt, based on the date in the file. Is that possible?
Yes; it is possible and not particularly hard. The split function is one way of breaking up the value in $1. The regex specifies that square brackets, slashes and commas are the field separators. There are 5 sub-fields in the value in $1: an empty field before the [, the three numeric components separated by slashes, and an empty field after the ,. The array name, dmy, is mnemonic for the sequence in which the components are stored.
/^\[[0-3][0-9]\/[01][0-9]\/[12][0-9]{3},/ {
if ($1 != old_date)
{
if (outfile != "") close(outfile)
n = split($1, dmy, "[/\[,]")
outfile = sprintf("%s.%s.%s.txt", dmy[4], dmy[3], dmy[2])
old_date = $1
}
}
{ print > outfile }
Permute the numbers 4, 3, 2 in the sprintf() statement to suit yourself. The given order is year, month, day, which has many merits, including that it follows the ISO 8601 standard and that the files sort automatically into date order. I strongly counsel its use, but you may do as you wish. For the input shown in the question, the files it generates are:
2015.11.01.txt
2015.11.04.txt
2015.11.08.txt
2015.11.09.txt
2015.11.15.txt
Here is my idea, using a sed command and an awk script.
$ cat biglog
[01/11/2015, 02:19]
foo
[01/11/2015, 08:40]
bar
[04/11/2015, 12:21]
foo
bar
aaa
bbb
[08/11/2015, 14:12]
bar
foo
$ cat sample.awk
#!/bin/awk -f
BEGIN {
FS = "\n"
RS = "\n\n"
}
{
date = substr($1, 2, 2)
filename = date ".txt"
for (i = 2; i <= NF; i++) {
print $i >> filename
}
}
How to use (the first sed inserts a blank line before each date tag so the awk script sees one record per tag; sed -e 1d removes the leading blank line):
sed -e 's/^\(\[[0-9][0-9]\)/\n\1/' biglog | sed -e 1d | ./sample.awk
Confirmation
ls *.txt
01.txt 04.txt 08.txt
$ cat 01.txt
foo
bar
$ cat 04.txt
foo
bar
aaa
bbb
$ cat 08.txt
bar
foo
yet another awk
$ awk -F"[[/,]" -v d="." '/^[\[0-9\/, :\]]*$/{f=$4 d $3 d $2 d"txt"}
{print $0>f}' file
$ ls 20*
2015.11.01.txt 2015.11.04.txt 2015.11.08.txt 2015.11.09.txt 2015.11.15.txt
$ cat 2015.11.01.txt
[01/11/2015, 02:19]
foo
[01/11/2015, 08:40]
bar

How do I write a one-liner script that inserts the contents of one file into another file?

Say I have file A, in the middle of which there is a tag string "#INSERT_HERE#". I want to put the whole content of file B at that position in file A. I tried using a pipe to concatenate the contents, but I wonder if there is a more elegant one-line script to handle it.
$ cat file
one
two
#INSERT_HERE#
three
four
$ cat file_to_insert
foo bar
bar foo
$ awk '/#INSERT_HERE#/{while((getline line<"file_to_insert")>0){ print line };next }1 ' file
one
two
foo bar
bar foo
three
four
while IFS= read -r line; do if [ "$line" = "#INSERT_HERE#" ]; then cat file_to_insert; else printf '%s\n' "$line"; fi; done < file
Use sed's r command:
$ cat foo
one
two
#INSERT_HERE#
three
four
$ cat bar
foo bar
bar foo
$ sed '/#INSERT_HERE#/{ r bar
> d
> }' foo
one
two
foo bar
bar foo
three
four
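To keep it on a single command line without the embedded newline, the address can simply be repeated; the text queued by r is still written out even though the marker line itself is deleted:
$ sed -e '/#INSERT_HERE#/r bar' -e '/#INSERT_HERE#/d' foo
one
two
foo bar
bar foo
three
four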
