Add <br> to the end of each line in a file via bash - bash

I am trying to add "<br>" to the end of each line in a .log file, and create an HTML file of the results.
I have tried
sed 's/$/<br><br>/' latest.log >> latest.html
After 395 lines, it cuts out. I would just rename the .log file to .html, but then the line breaks don't carry over. Sorry if any of this seems weird; I'm fairly new to this.

Well, it's hard to say because there might be something wrong with your input file (for example, some unwanted whitespace characters).
But you can insert it in a million ways; the simplest one:
sed 's/.*/&<br><br>/'
Do you need me to explain it?

I'll just use tags at the beginning of the first line and at the end. Thank you, Walter A.
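A minimal sketch of that approach, assuming the tags in question are a <pre>...</pre> wrapper (so the browser preserves the log's own line breaks) and using the file names from the question:
# wrap the whole log in a <pre> block instead of adding <br> per line
{ echo '<pre>'; cat latest.log; echo '</pre>'; } > latest.html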

Related

How to handle improper Data Coming from CSV in Informatica

I have a source file (CSV) and need to load it into a target (Oracle), but I get an error:
FR_3065 ROW[4],Filed [Student_rollnumber]:Invalid Number:[.].The row will be skipped
CSV table:
Student_rollnumber,Studnet_Name,Marks,Subjects
10,'Revanth',70,"Maths",
11,'Satish',85,Science
12,'Anil',75,"Java
",
13,'Surya',90,"C++",
14,'Ramana',85,"python",
15,'Sudheer'70,"Informatica
",
16,'Prakash',85,"SQL"
I found that on line 4 the quote and comma (",) are on the next line. How can I concatenate both ("Java",) and make it a single column (Subjects)?
MatchQuotesPastEndOfLine mentioned by Koushik should work.
Alternatively, you may use sed with the pattern below to replace a newline followed by " with just a " - as a result removing the newline at the end of the quoted string.
sed ':a;N;$!ba;s/\n"/"/g'
Feel free to test this gist.
This, however, will remove only the trailing newline and will not help if the break is anywhere in the middle. As said, the MatchQuotesPastEndOfLine mentioned by Koushik is the best possible solution.
The above is based on this question.
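For reference, a usage sketch of that sed one-liner against the sample data above (students.csv is a hypothetical file name):
# read the whole file into the pattern space, then join any newline
# that is immediately followed by a double quote
sed ':a;N;$!ba;s/\n"/"/g' students.csv > students_fixed.csv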

Using grep/sed for any length patterns between two words

I have a directory called mydirectory with a bunch of text files containing the words 'SAVE' and 'ME!' multiple times. I want to print every line matching the pattern 'SAVE' | anything here, for any amount of characters | 'ME' | any non-zero amount of !s.
To do this, I came up with sed -n '/SAVE/,/ME!\{1\}/p' mydirectory/*, but it did not work. Does anybody know how to do this? I can only use sed and grep for this.
File:
SAVE US OR JUST ME!!
BRAINSSSSSSSSSSSSS
SAVE US OR JUST ME
BRAINSSSSSSSSSSSSS
SAVE ME!
BRAINNNNNNNNNNNNS
SAVE ME
Desired Output
SAVE US OR JUST ME!!
SAVE ME!
$ grep -E '^SAVE.*ME!+$' file
output:
SAVE US OR JUST ME!!
SAVE ME!
The ^ and $ anchor the pattern to the beginning and end of the line, which I guess is what you want.
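If it must be sed rather than grep, the same anchored pattern can be written as a print-only sed command (a sketch using BRE interval syntax; pass mydirectory/* instead of file to search all files as in the question):
sed -n '/^SAVE.*ME!\{1,\}$/p' file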

Finding a newline in the csv file

I know there are a lot of questions about this (latest one here), but almost all of them are about how to join those broken lines into one in a CSV file, or remove them. I don't want to remove them; I just want to display/find those lines (or maybe the line numbers?).
Example data:
22224,across,some,text,0,,,4 etc
33448,more,text,1,,3,,,4 etc
abcde,text,number,444444,0,1,,,, etc
358890,more
,text,here,44,,,, etc
abcdefg,textds3,numberss,413,0,,,,, etc
985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
After more searching on this, I know I shouldn't use bash to accomplish this, but should rather use perl. I tried (from various websites; I don't know perl), but apparently I don't have the Text::CSV package and I don't have permission to install it.
As I said, I have no idea how to even start looking for this, so I don't have any script. This is not a Windows file, it is very much a Unix file, so we can ignore the CR problem.
Desired output:
358890,more
,text,here,44,,,, etc
985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
or
Line 4: 358890,more
,text,here,44,,,, etc
Line 7: 985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
Much appreciated.
You can use perl to count the number of fields (commas) and keep appending the next line until it reaches the correct number (here 28 commas; tr/,/,/ returns the comma count, so adjust 28 to your data):
perl -ne 'if(tr/,/,/<28){$line=$.;while(tr/,/,/<28){$_.=<>}print "Line $line: $_\n"}' file
I do love Perl, but I don't think it is the best tool for this job.
If you want a report of all lines that do NOT have exactly the correct number of commas/delimiters, you could use the Unix tool awk.
For example, this command:
/usr/bin/awk -F , 'NF != 8' < csv_file.txt
will print all lines that do NOT have exactly 7 commas (i.e. 8 fields). The field separator is specified with -F, and the number of fields on each line is available as NF.
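Building on that, a sketch that also prints the line number the question asks for (the expected count of 8 fields and the file name csv_file.txt are taken from the answer above; adjust them to your data):
# print "Line N: <line>" for every physical line whose field count is not exactly 8
awk -F ',' 'NF != 8 { print "Line " NR ": " $0 }' csv_file.txt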

Replacing Middle Part of String Occurring Multiple Times

I have a file, that has variations of this line multiple times:
source = "git::https://github.com/ORGNAME/REPONAME.git?ref=develop"
I am passing through a tag name in a variable. I want to find every line that starts with source and update that line in the file to be
source = "git::https://github.com/ORGNAME/REPONAME.git?ref=$TAG"
This should be doable with awk or sed, but I'm having some difficulty making it work. Any help would be much appreciated!
Best,
Keren
Edit: In this example it says "develop", but it could also be set to "feature/test1" or "0.0.1".
Edit2: The line with "source" is also indented by three or four spaces.
This should do:
sed 's/^\([[:blank:]]*source.*[?]ref=\)[^"]*\("\)/\1'"$TAG"'\2/' file
With sed:
$ sed '/^[[:blank:]]*source/s|ref=develop"$|ref='"$TAG"'"|' file
This replaces ref=develop" at the end of the line with the value of $TAG (plus the closing quote) on lines starting with source, allowing the leading indentation mentioned in the edit. Using | as the s-command delimiter keeps a tag containing a slash, such as feature/test1, from breaking the command.
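A usage sketch putting the first answer together (the TAG value and the file name main.tf are assumptions, and the s delimiter is switched to | so a tag like feature/test1 does not clash with it):
TAG="feature/test1"   # assumption: the tag value passed in by the caller
# GNU sed -i edits main.tf in place; drop -i to preview the result first
sed -i 's|^\([[:blank:]]*source.*[?]ref=\)[^"]*\("\)|\1'"$TAG"'\2|' main.tf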

use sed with for loop to delete lines from text file

I am essentially trying to use sed to remove a few lines within a text document, to clean it up. But I'm not getting it right at all; I'm missing something and I have no idea what...
#!/bin/bash
items[0]='X-Received:'
items[1]='Path:'
items[2]='NNTP-Posting-Date:'
items[3]='Organization:'
items[4]='MIME-Version:'
items[5]='References:'
items[6]='In-Reply-To:'
items[7]='Message-ID:'
items[8]='Lines:'
items[9]='X-Trace:'
items[10]='X-Complaints-To:'
items[11]='X-DMCA-Complaints-To:'
items[12]='X-Abuse-and-DMCA-Info:'
items[13]='X-Postfilter:'
items[14]='Bytes:'
items[15]='X-Original-Bytes:'
items[16]='Content-Type:'
items[17]='Content-Transfer-Encoding:'
items[18]='Xref:'
for f in "${items[@]}"; do
sed '/${f}/d' "$1"
done
What I am thinking, incorrectly it seems, is that I can setup a for loop to check each item in the array that I want removed from the text file. But it's simply not working. Any idea. Sure this is basic and simple and yet I can't figure it out.
Thanks,
Marek
Much better to create a single sed script, rather than generate 19 small scripts in sequence.
Fortunately, generating a script by joining the array elements is moderately easy in Bash:
regex=$(printf '\|%s' "${items[@]}")
regex=${regex#'\|'}
sed "/^$regex/d" "$1"
(Notice also the addition of ^ to the final regex -- I assume you only want to match at beginning of line.)
Properly, you should not delete any lines from the message body, so the script should leave anything after the first empty line alone:
sed "1,/^\$/!b;/^$regex/d" "$1"
Add -i if you want in-place editing of the target file.
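Pulling it together, a consolidated sketch of the corrected script (assuming the items array from the question is defined above, "$1" is the message file, and GNU sed for -i):
#!/bin/bash
# join the header names with \| (BRE alternation) and strip the leading \|
regex=$(printf '\|%s' "${items[@]}")
regex=${regex#'\|'}
# delete matching header lines in place, leaving the body (everything after the first blank line) alone
sed -i "1,/^\$/!b;/^$regex/d" "$1"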
