I am a vim user and know some basic awk and bash commands. I have a text (VCF) file larger than 20 GB, and I want to move line #69 to just below line #66:
$ less huge.vcf
...
66 ##contig=<ID=9,length=124595110>
67 ##contig=<ID=X,length=171031299>
68 ##contig=<ID=Y,length=91744698>
69 ##contig=<ID=MT,length=16299>
...
What I want is:
...
66 ##contig=<ID=9,length=124595110>
67 ##contig=<ID=MT,length=16299>
68 ##contig=<ID=X,length=171031299>
69 ##contig=<ID=Y,length=91744698>
...
I tried to open and edit it in vim (with the LargeFile plugin installed), but that still did not work well.
The easy approach is to copy the section you want to edit out of your file, modify it, then copy it back in.
# extract the first hundred lines
head -n 100 huge.vcf > start.txt
# modify that extracted subset
vim start.txt
# copy that section back into the beginning of larger file
dd if=start.txt of=huge.vcf conv=notrunc
Note that this only works if your edits don't change the size of the section being modified. That is to say: make sure that start.txt has exactly the same size in bytes after being modified as it had before.
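In your case the edit only reorders whole lines, so the byte count of the section stays the same, but it is worth verifying before writing it back. A minimal check, using the same file names as above:
# compare the byte count before and after editing; the numbers must match
wc -c start.txt   # before editing
vim start.txt     # reorder the lines
wc -c start.txt   # after editing: must print the same number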
Here's an awk version:
$ awk 'NR>=3 && NR<=4{b=b (b==""?"":ORS) $0;next}1;NR==5 {print b}' file
...
66 ##contig=<ID=9,length=124595110>
69 ##contig=<ID=MT,length=16299>
67 ##contig=<ID=X,length=171031299>
68 ##contig=<ID=Y,length=91744698>
...
You need to change the line numbers in the code, though: 3 -> 67, 4 -> 68, and 5 -> 69, and redirect the output to a new file. If you'd like it to edit the file in place, use -i inplace with GNU awk.
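Adapted to your line numbers and writing to a new file, that would be something like this (the output file name is just an example):
$ awk 'NR>=67 && NR<=68{b=b (b==""?"":ORS) $0;next}1;NR==69 {print b}' huge.vcf > fixed.vcf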
I have been working on a device that allows login via telnet, and I extracted some data from the devices and made some reports without any problems. Recently I had to switch to SSH; the rest of the script is all the same, only the login procedure has been changed from telnet to SSH. After switching to SSH, I am facing a problem with the extracted data: there are invalid characters in some of the lines. Below is an example; as can be seen, there is an invalid character after PON7 in the line:
OLT:LT6.PON7.ONT1,ALARM,Date time,
The problem is that this invalid character is not even visible in the bash/CSV file; it was only discovered when I copied the line into Notepad++ or while posting it here.
Now I have two questions:
1st: does anyone know what is causing these invalid characters when switching between telnet and SSH?
2nd: how can I deal with this invalid character in bash? It is not even visible in bash, but this report is being used elsewhere and these invalid characters are causing problems.
Edit:
Pasting the text into a text-to-hex converter produces this:
4f 4c 54 3a 4c 54 36 2e 50 4f 4e 37 11 2e 4f 4e 54 31 2c 41 4c 41 52 4d 2c 44 61 74 65 20 74 69 6d 65 2c
It looks like there's a DC1 character (hex 11) between the "7" and the ".".
Unfortunately, this edit also has the side effect of removing the character from the sample text.
Passing your text through a text-to-hexadecimal converter shows that the invisible character is an ASCII DC1 character (hex 11, octal 021). This character is also known as Ctrl-Q or XON; it is sometimes used in flow control.
In a bash script, you could filter it out using the tr program:
echo "$badtext" | tr -d '\021'
SSH doesn't inherently insert DC1 characters into text streams. If you're getting a DC1 character in the output from a device, presumably the device sent that character.
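To clean a whole report file rather than a single variable, the same filter works on a stream (the file names here are just placeholders):
# strip every DC1 (hex 11, octal 021) byte from the report
tr -d '\021' < report.csv > report_clean.csv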
I am trying to save a few lines into an output file called Temporal_regions.txt in my folder, so I can continue with further processing.
I took a look at some other posts, and they suggested doing it this way:
echo "some file content" > /path/to/outputfile
So my returned output looks like this:
1 3 13579 586 Right-Temporal 72 73 66 54
2 5 24680 587 Left-Temporal 89 44 65 56
3 7 34552 599 Right-Temporal 72 75 66 54
4 8 24451 480 Left-Temporal 68 57 47 66
*All of these lines were individually returned as output by running grep on another file (call it TR.stats).
If I want to store these outputs in the .txt file (call it TemperolRegion.txt), I would have to create the output file first, right? What would be the command to do that?
Then I can simply use the way suggested above to store them in the .txt file?
grep "Left-Temporal" TR.stats > /path/to/TemperolRegion.txt
I can't seem to get the commands right.
It sounds like you are missing the directories along the path. On Linux, the parent directory must exist before you can create a file in it; the file itself you do not need to create in advance, since the > redirection creates (or truncates) it for you. You can create every directory along the path to the parent by using mkdir with the -p option.
mkdir -p /path/to
grep "Left-Temporal" TR.stats > /path/to/TemperolRegion.txt
Another problem you might have is that you are not in the same directory as TR.stats. In that case you should use the absolute path for that file as well.
grep "Left-Temporal" /path/to/TR.stats > /path/to/TemperolRegion.txt
Good afternoon.
I am running pcl6.exe version 9.15 on Windows 8.1.
I am running into a problem where pcl6.exe is silently converting any APOSTROPHE characters into RIGHT SINGLE QUOTATION MARK characters when using the 16602 typeface in a PCL5 file.
Here is the command line I am using:
pcl6.exe -dNOPAUSE -sDEVICE=txtwrite -sOutputFile=test.txt test.prn
test.prn input (hex)
1B 28 30 55 1B 28 73 31 70 31 30 76 31 36 36 30 32 54 1B 26 61 30 76 30 48 3E 27 3C
test.prn input (text ['.' is the escape character])
.(0U.(s1p10v16602T.&a0v0H>'<
test.txt output (hex)
20 20 3E E2 80 99 3C 0D 0A
test.txt output (text)
>’<..
expected test.txt output (hex)
20 20 3E 27 3C 0D 0A
expected test.txt output (text)
>'<..
Is there a flag or an option somewhere that can disable this conversion?
Thank you for your time.
txtwrite does the best it can with the input. PCL tends not to carry much information in the file to allow us to determine what the glyph should be (PostScript is better, and PDF often better still).
If you think there is a real problem, I would suggest your best bet is to open a bug report. Apart from anything else, I would need to see the PCL file to determine what's going on. Most likely the character code you are using corresponds to an apostrophe, which in this specific font is drawn as a right quote. There is no way for the text extraction device to know what shape the font will draw in response to a character code. At least, not in PCL.
The problem was caused by the symbolset.
The sample was using the PCL ISO 6: ASCII symbolset (code 0U):
http://www.pcl.to/symbolset/pcl_0u.pdf
According to the 0U symbolset reference, APOSTROPHE (0x27) is replaced with RIGHT SINGLE QUOTATION MARK (U+2019). pcl6.exe then writes that code point as its UTF-8 encoding: 0xE2 0x80 0x99.
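If you need the plain apostrophe back in the extracted text, one possible workaround is to post-process the txtwrite output and map the sequence back; a sketch with sed (file names are examples, and a UTF-8 locale is assumed):
# map U+2019 (UTF-8 bytes e2 80 99) back to a plain apostrophe
sed "s/’/'/g" test.txt > test_fixed.txt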
I am not an experienced Ruby programmer, so bear with me. I have a problem with this specific text file containing two lines (this issue only shows up on occasion):
trim(0, 15447)
0, 15447
I am trying to read these two lines with the following code:
File.open(trim).each do |line|
puts line
end
I normally obtain the expected output, but here I get only one line, with some characters missing:
0, 1544715447)
If I check the character codes, I get this:
irb(main):120:0> File.open(trim).each do |line|
irb(main):121:1* puts '========================'
irb(main):122:1> puts line
irb(main):123:1> puts '........................'
irb(main):124:1> puts line.each_byte {|c| print c, ' ' }
irb(main):125:1> end
========================
0, 1544715447)
........................
116 114 105 109 40 48 44 32 49 53 52 52 55 41 13 48 44 32 49 53 52 52 55 trim(0,0, 15447
=> #<File:E:\Public\Public_videos\Soccer\1995_0129_odp_es\950129-ODP_&m3_trim30.txt>
I frankly don't understand what is going on, as I don't see any hidden characters, and this happens randomly but consistently with some files.
Any suggestion to help me understand or avoid this issue would be greatly appreciated.
What happened is that your file had two "lines" separated by a carriage return character, and not a line feed.
You showed the bytes in your file as
116 114 105 109 40 48 44 32 49 53 52 52 55 41 13 48 44 32 49 53 52 52 55
That 13 is a carriage return, which is sometimes "displayed" by moving the cursor back to the start of the line being written.
So first it wrote out
trim(0, 15447)
then it went back to the start of the same line and wrote
0, 15447
overlaying the initial line! What do you end up with?
0, 1544715447)
Your "problem" is probably best fixed by reencoding that text file of yours to use a better way to separate lines. On Unix systems, including OSX these days, the line terminator is character 10 - known as LINE FEED. Windows uses the two-character combination 13 10 (CR LF). Only old Mac systems to my knowledge used the 13.
Many text editors today will let you select a "line ending" option, so you might be able to just open that file and save it with a different line ending. FWIW, my guess is that you are using Windows now, which is known for rendering CRs and LFs differently than *nix systems.
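If you would rather fix the file from a shell than from an editor, converting bare CRs to LFs is a one-liner with tr (the file names are placeholders for yours):
# turn classic-Mac line endings (CR) into Unix ones (LF)
tr '\r' '\n' < trim30.txt > trim30_unix.txt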
I am writing a script that takes a UTF-16 encoded text file as input and outputs a UTF-16 encoded text file.
use open "encoding(UTF-16)";
open INPUT, "< input.txt"
or die "cannot open > input.txt: $!\n";
open(OUTPUT,"> output.txt");
while(<INPUT>) {
print OUTPUT "$_\n"
}
Let's just say that my program writes everything from input.txt into output.txt.
This WORKS perfectly fine in my cygwin environment, which is using "This is perl 5, version 14, subversion 2 (v5.14.2) built for cygwin-thread-multi-64int"
But in my Windows environment, which is using "This is perl 5, version 12, subversion 3 (v5.12.3) built for MSWin32-x64-multi-thread",
Every line in output.txt except the first is prepended with strange symbols.
For example:
<FIRST LINE OF TEXT>
㈀ Ⰰ ㈀Ⰰ 嘀愀 ㌀ 䌀栀椀愀 䐀⸀⸀⸀ 儀甀愀渀最 䠀ഊ<SECOND LINE OF TEXT>
...
Can anyone give some insight on why it works on cygwin but not windows?
EDIT: Here are the PerlIO layers, printed as suggested.
In Windows environment:
unix
crlf
encoding(UTF-16)
utf8
unix
crlf
encoding(UTF-16)
utf8
In Cygwin environment:
unix
perlio
encoding(UTF-16)
utf8
unix
perlio
encoding(UTF-16)
utf8
The only difference is the perlio layer versus the crlf layer.
[ I was going to wait and give a thorough answer, but it's probably better if I give you a quick answer than nothing. ]
The problem is that the crlf and encoding layers are in the wrong order. Not your fault.
For example, say you do print "a\nb\nc\n"; using UTF-16le (since it's simpler and it's probably what you actually want). You'd end up with
61 00 0D 0A 00 62 00 0D 0A 00 63 00 0D 0A 00
instead of
61 00 0D 00 0A 00 62 00 0D 00 0A 00 63 00 0D 00 0A 00
I don't think you can get the right results with the open pragma or with binmode, but it can be done using open with the layers spelled out explicitly:
open(my $fh, '<:raw:encoding(UTF-16):crlf', $qfn)
    or die "Can't open $qfn: $!";
You'll need to append a :utf8 with some older versions, IIRC.
It works on cygwin because the crlf layer is only added on Windows. There you'd get
61 00 0A 00 62 00 0A 00 63 00 0A 00
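If you want to check which of the two byte sequences your script actually produced, a hex dump of the output file shows it directly (output.txt as in your script; od is available under Cygwin, for instance):
od -An -tx1 output.txt | head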
You have a typo in your encoding: it should be use open ":encoding(UTF-16)". Note the colon. I don't know why it would work on Cygwin but not Windows; it could also be a 5.12 vs 5.14 thing. Perl seems to make up for the missing colon, but it could be what's causing your problem.
If that doesn't do it, check if the encoding is being applied to your filehandles.
print map { "$_\n" } PerlIO::get_layers(*INPUT);
print map { "$_\n" } PerlIO::get_layers(*OUTPUT);
Use lexical filehandles (i.e. open my $fh, "<", $file). Glob filehandles are global, so something else in your program might be interfering with them.
If all that checks out, if lexical filehandles are getting the encoding(UTF-16) applied, let us know and we can try something else.
UPDATE: This may provide your answer: "BOMed UTF files are not suitable for streaming models, and they must be slurped as binary files instead." Looks like you have to read the file in as binary and do the encoding as a string. This may have been a bug fixed in 5.14.
UPDATE 2: Yep, I can confirm this is a bug that was fixed in 5.14.