replace any line that starts with the symbol # using sed, awk, cut - bash

this is simple but I was hoping for a quick command (using sed, cut, awk or something in BASH preferably) to do this:
replace any line that starts with the symbol #:
#<text, on one line, including numbers, letters and colons>
with
#<text, on one line, including numbers, letters and colons>/1
The # is always consistent, the <text, on one line, including numbers, letters and colons> changes. (It's Fastq format for the bioinformaticians out there).
Example:
#HWI-D00193:58:H73UEADXX:1:1101:1516:2209 1:N:0:ATCACG
change to
#HWI-D00193:58:H73UEADXX:1:1101:1516:2209 1:N:0:ATCACG/1
I know this is simple, sorry.

With sed, you can do as below:
sed "/^#/ s/$/\/1/g" file
This matches lines that start with # and appends /1 to them (a substitution at the end of the line, to be precise). The g flag is harmless but not needed here, since $ can only match once per line.
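If the escaped slash is awkward to read, the s command accepts any delimiter, so this equivalent form avoids it:
sed '/^#/ s|$|/1|' file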

Using awk
awk '/^#/ {$0=$0"/1"}1' file
The /^#/ pattern selects the header lines, the action appends /1 to the record, and the trailing 1 is an always-true condition that makes awk print every line.
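A quick check with the sample line from the question, using a bash here-string:
awk '/^#/ {$0=$0"/1"}1' <<< '#HWI-D00193:58:H73UEADXX:1:1101:1516:2209 1:N:0:ATCACG'
# => #HWI-D00193:58:H73UEADXX:1:1101:1516:2209 1:N:0:ATCACG/1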

Related

Using shell scripts to remove all commas except for the first on each line

I have a text file consisting of lines which all begin with a numerical code, followed by one or several words, a comma, and then a list of words separated by commas. I need to delete all commas in every line apart from the first comma. For example:
1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary
which should be changed to
1.2.3 Example question, a question that is hopefully not too rudimentary
I have tried using sed and shell scripts to solve this, and I can figure out how to delete the first comma on each line (1) and how to delete all commas (2), but not how to delete only the commas after the first comma on each line.
(1)
while read -r line
do
echo "${line/,/}"
done <"filename.txt" > newfile.txt
mv newfile.txt filename.txt
(2)
sed 's/,//g' filename.txt > newfile.txt
You need to capture the first comma, and then remove the others. One option is to change the first comma into some otherwise unused character (Control-A for example), then remove the remaining commas, and finally replace the replacement character with a comma:
sed -e $'s/,/\001/; s/,//g; s/\001/,/'
(using Bash ANSI C quoting — the \001 maps to Control-A).
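Applied to the sample line from the question, for example:
sed -e $'s/,/\001/; s/,//g; s/\001/,/' <<< '1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
# => 1.2.3 Example question, a question that is hopefully not too rudimentary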
An alternative mechanism uses sed's labels and branches, as illustrated by Wiktor Stribiżew's answer.
If using GNU sed, you can specify a number in the flags of sed's s/// command along with g to indicate which match to start replacing at:
$ sed 's/,//2g' <<<'1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
1.2.3 Example question, a question that is hopefully not too rudimentary
Its manual says:
Note: the POSIX standard does not specify what should happen when you mix the g and NUMBER modifiers, and currently there is no widely agreed upon meaning across sed implementations. For GNU sed, the interaction is defined to be: ignore matches before the NUMBERth, and then match and replace all matches from the NUMBERth on.
so if you're using a different sed, your mileage may vary. (OpenBSD and NetBSD seds raise an error instead, for example).
You can use
sed ':a; s/^\([^,]*,[^,]*\),/\1/;ta' filename.txt > newfile.txt
Details
:a - sets an a label
s/^\([^,]*,[^,]*\),/\1/ - finds zero or more non-commas at the start of the string, then a comma, then again zero or more non-commas, capturing all of that into Group 1; it then matches one more , and replaces the whole match with the contents of Group 1 (i.e., removes that non-first comma)
ta - upon a successful replacement, jumps back to the a label location.
See an online sed demo:
s='1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
sed ':a; s/^\([^,]*,[^,]*\),/\1/;ta' <<< "$s"
# => 1.2.3 Example question, a question that is hopefully not too rudimentary
awk 'NF>1 {$1=$1","} 1' FS=, OFS= filename.txt
With the field separator set to a comma and the output field separator empty, re-appending a comma to the first field and letting awk rebuild the record keeps the first comma and drops all the others; the final 1 prints every record.
sed ':a;s/,//2;t a' filename.txt
This repeatedly deletes the second comma on the line (s/,//2) and branches back to the a label for as long as a substitution succeeded, until only the first comma is left.
sed 's/,/\
/;s/,//g;y/\n/,/' filename.txt
This turns the first comma into an embedded newline, deletes all remaining commas, then transliterates the newline back into a comma.
This might work for you (GNU sed):
sed 's/,/&\n/;h;s/,//g;H;g;s/\n.*\n//' file
Append a newline to the first comma.
Copy the current line to the hold space.
Remove all commas in the current line.
Append the current line to the hold space.
Swap the current line for the hold space.
Remove everything between the introduced newlines.
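Run on the sample line, that gives:
s='1.2.3 Example question, a, question, that, is, hopefully, not, too, rudimentary'
sed 's/,/&\n/;h;s/,//g;H;g;s/\n.*\n//' <<< "$s"
# => 1.2.3 Example question, a question that is hopefully not too rudimentary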

Sed substitution places characters after back reference at beginning of line

I have a text file that I am trying to convert to a LaTeX file for printing. One of the first steps is to go through and change lines that look like:
Book 01 Introduction
To look like:
\chapter{Introduction}
To this end, I have devised a very simple sed script:
sed -n -e 's/Book [[:digit:]]\{2\}\s*\(.*\)/\\chapter{\1}/p'
This does the job, except that the closing curly bracket is placed where the initial backslash should be in the substituted output. Like so:
}chapter{Introduction
Any ideas as to why this is the case?
Your call to sed is fine; the problem is that your file uses DOS line endings (CRLF), and sed does not treat the CR as part of the line ending but as just another character on the line. The string Introduction\r is therefore captured, and the result \chapter{Introduction\r} is displayed by first printing everything up to the carriage return (the ^ represents the cursor position)
\chapter{Introduction
^
then moving the cursor to the beginning of the line
\chapter{Introduction
^
then printing the rest of the result (}) over what has already been printed
}chapter{Introduction
^
The solution is either to fix the file to use standard POSIX line endings (linefeed only), or to modify your regular expression so that it cannot capture the carriage return at the end of the line. A carriage return is a control character, so a negated [[:cntrl:]] class keeps it out of the group, and a trailing .* swallows it:
sed -n -e 's/Book [[:digit:]]\{2\}\s*\([^[:cntrl:]]*\).*/\\chapter{\1}/p'
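If you would rather fix the file itself, stripping the carriage returns once is enough (the filenames below are just placeholders; the in-place variant assumes GNU sed):
tr -d '\r' < book.txt > book-unix.txt
sed -i 's/\r$//' book.txt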
As an alternative to sed, awk using gsub might work well in this situation:
awk '{gsub(/Book [0-9]+/,"\\chapter"); print $1"{"$2"}"}'
Result:
\chapter{Introduction}
A solution is to modify the capture group. In this case, since all book chapter names consist only of alphabetic characters, I was able to use [[:alpha:]]*. This gave a revised sed script of:
sed -n -e 's/Book [[:digit:]]\{2\}\s*\([[:alpha:]]*\)/\\chapter{\1}/p'

Use sed to count periods, commas, and numbers?

I have a file that looks like this:
19.217.179.33,175.176.12.8
253.149.205.57,174.210.221.195
222.118.178.218,255.99.100.202
241.55.199.243,167.98.204.104
38.224.198.117,21.11.184.68
Each line is 2 IP addresses, separated by a comma. So, each line should meet these requirements:
Has 1 comma.
Has 6 periods.
Has ONLY numbers, commas, and periods.
If a line is missing a period, has more or less than one comma, has a letter, is blank, or anything like that - it isn't correct. Basically I just want to use sed or something similar to loop through each line in the file and make sure each of them meets the above requirements.
Is this something that can be done with sed? I know you can use it to delete lines that do/don't have matching strings, but I wasn't sure about counting specific characters or verifying that a line only has certain characters.
Any help would be greatly appreciated. Thanks!
I think grep is a better tool for this. You just want to ensure that each line matches a particular regex, so invert the grep with -v and label the input invalid if any line gets output. Something like:
grep -qvE '^([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3}$' input || echo input is valid
You can simplify that a bit:
IP='([0-9]{1,3}\.){3}[0-9]{1,3}'
grep -qvE "^$IP,$IP$" input || echo input is valid
Or if you are more interested in invalid data:
grep -qvE "^$IP,$IP$" input && echo input is invalid
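If you also want to see which lines are bad rather than just get a verdict, drop -q and add -n so grep prints the offending lines with their line numbers:
grep -nvE "^$IP,$IP$" input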
What I'd do is to think up a regular expression that fits the 'proper' lines, and omits them from printing. Like this:
sed -r '/^([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3}$/d' file
Everything that remains is a wrong line.
Here's the recipe in more detail:
[0-9]{1,3} between one and three digits
\. literal period (an unescaped period is a wildcard that matches any character, which is why it has to be escaped here)
(...){3} three repetitions of something, so together
([0-9]{1,3}\.){3}[0-9]{1,3} makes up something that looks like an IP address. (Though note that it doesn't enforce the <256 rule, so 999.999.999.999 matches; a stricter octet pattern is sketched at the end of this answer.)
/^ ... $/ the match needs to start at the beginning of the line and run until its end.
'/ ... /d' print everything except lines that match what's inside the two slashes
-r enables extended regular expressions, so the {1,3} and (...) syntax is recognised without backslashes.
This will find and print the lines that are wrong. If you want to delete the wrong lines, you can easily invert this:
sed -i.bak -n -r '/^([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3}$/p' file
-i.bak means keep a backup, but overwrite the input file
-n means don't output anything unless expressly directed to output, and
/ ... /p output all the lines that match this regex.
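If you also need to enforce the <256 rule mentioned above, a stricter (and much longer) octet pattern can be substituted for [0-9]{1,3}; this is only a sketch and it still tolerates leading zeros:
OCTET='(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
sed -r "/^($OCTET\.){3}$OCTET,($OCTET\.){3}$OCTET$/d" file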
If you would like to display only information about whether the file's contents are correct, you can use this command:
sed -n -r '/^([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3}$/!{a \
FILE IS INCORRECT
;q;};$aFILE IS OK' file
It's a modified version of chw21's answer above, but it displays only an information message:
FILE IS INCORRECT (as soon as the first bad line is found), or
FILE IS OK.

How can I add a comma at the end of every line except the last line?

I want to add a comma at the end of every line of a file like this, except the last line:
I have this now:
{...}
{...}
{...}
{...}
I want this:
{...},
{...},
{...},
{...}
How can I do it with the command line? Is it possible with sed command?
The following sed script has an address expression $! which matches every line except the last, and a substitution action s/$/,/ which appends a comma at the end of the line.
sed '$!s/$/,/' file
(The $ in the address refers to the last line while the $ in the regular expression in the substitution refers to the last character position on every line. They are unrelated, though in some sense similar.)
This prints the modified contents of file to standard output; redirect to a different file to save them to a file, or use sed -i if your sed supports that. (Some variants require an empty argument to the -i option, notably *BSD / OSX sed.)
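For example, to edit the file in place rather than print to standard output:
sed -i '$!s/$/,/' file      # GNU sed
sed -i '' '$!s/$/,/' file   # BSD / macOS sed, where -i requires a (possibly empty) backup suffix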
If your task is to generate valid JSON from something which is not, I'm skeptical of this approach. Structured formats should be manipulated with tools which understand structured formats, not basic line-oriented tools; but if this helps you transform line-oriented data towards proper JSON, that's probably a valid use.
... As an afterthought, maybe you want to wrap the output in a set of square brackets, too. Then it's actually technically valid JSON (assuming each line is a valid JSON fragment on its own).
sed '1s/^/[/;$!s/$/,/;$s/$/]/' file
On the first line (address expression 1) substitute in an opening square bracket at beginning of line (s/^/[/). On lines which are not the last, add a trailing comma, as above. On the line which is the last (address expression $) add a closing square bracket at the end of line (s/$/]/). If you prefer newlines to semicolons, that's fine; many sed dialects also allow you to split this into multiple -e arguments.
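With the sample input above, that produces:
$ sed '1s/^/[/;$!s/$/,/;$s/$/]/' file
[{...},
{...},
{...},
{...}]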
vim <file-name>
:1,$-1s/$/,/
To remove the commas again:
:%s/,$//
Keep it simple, just use awk:
$ awk '{printf "%s%s",sep,$0; sep=",\n"} END{print ""}' file
{...},
{...},
{...},
{...}
The trick is that sep is empty for the first line and ",\n" afterwards, so every line except the last ends up with a trailing comma; the END block just prints the final newline.

use sed to merge lines and add comma

I found several related questions, but none of them fits what I need, and since I am a real beginner, I can't figure it out.
I have a text file with entries like this, separated by a blank line:
example entry &with/ special characters
next line (any characters)
next %*entry
more words
I would like the output to merge the lines, put a comma between them, and delete the empty lines. I.e., the example should look like this:
example entry &with/ special characters, next line (any characters)
next %*entry, more words
I would prefer sed, because I know it a little bit, but I am also happy with any other solution on the Linux command line.
Improved per Kent's elegant suggestion:
awk 'BEGIN{RS="";FS="\n";OFS=","}{$1=$1}7' file
which allows any number of lines per block, rather than the rigid 2 lines per block I had. Thank you, Kent. Note: the 7 is Kent's trademark... any non-zero expression will cause awk to print the entire record, and he likes 7.
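For instance, with a made-up three-line block followed by a two-line block:
printf 'a\nb\nc\n\nd\ne\n' | awk 'BEGIN{RS="";FS="\n";OFS=","}{$1=$1}7'
a,b,c
d,e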
You can do this with awk:
awk 'BEGIN{RS="";FS="\n";OFS=","}{print $1,$2}' file
That sets the record separator to blank lines, the field separator to newlines and the output field separator to a comma.
Output:
example entry &with/ special characters,next line (any characters)
next %*entry,more words
Simple sed command,
sed ':a;N;$!ba;s/\n/, /g;s/, , /\n/g' file
:a;N;$!ba;s/\n/, /g -> this first part reads the whole file into the pattern space and replaces every newline with , (comma and space).
So after running only the first part, the output would be
example entry &with/ special characters, next line (any characters), , next %*entry, more words
s/, , /\n/g -> replacing , , with a newline in the above output gives the desired result.
example entry &with/ special characters, next line (any characters)
next %*entry, more words
This might work for you (GNU sed):
sed ':a;$!N;/.\n./s/\n/, /;ta;/^[^\n]/P;D' file
Append the next line to the current line; if there are characters on either side of the newline, replace the newline with a comma and a space, then repeat. Eventually an empty line or the end-of-file is reached; then print the first line of the pattern space only if it is not empty.
Another, slightly more sophisticated version (which also allows for whitespace in the "empty" lines) would be:
sed ':a;$!N;/^\s*$/M!s/\n/, /;ta;/\`\s*$/M!P;D' file
sed -n '1h;1!H
$ {x
s/\([^[:cntrl:]]\)\n\([^[:cntrl:]]\)/\1, \2/g
s/\(\n\)\n\{1,\}/\1/g
p
}' YourFile
This changes everything after loading the whole file into the buffer. It could also be done "on the fly" while reading the file, deciding based on whether each line is empty or not.
use -e on GNU sed
