I tried to copy and paste content from a Word document (.docx) into a .txt file and had it read by an NLTK corpus reader to find the number of paragraphs. It returned almost 30 paragraphs as one paragraph. When I manually entered a line break in the .txt file, it returned 30 paragraphs.
import nltk
corpusReader = nltk.corpus.reader.plaintext.PlaintextCorpusReader(".", "d.txt")
print "Paragraphs =", len(corpusReader.paras())
1. Is it possible for PlaintextCorpusReader to read .docx?
2. While copying and pasting from .docx to .txt, how do I preserve the line breaks?
3. Is there a way, using Python, to open the .txt file, find ? or ! or . or ... followed by some blank spaces (4 in number), and insert a line break automatically?
Edit 1.
I walked the para_block_reader=read_line_block path, but it always gives a paragraph count that is one too high.
import nltk
from nltk.corpus.reader.util import *
corpusReader = nltk.corpus.reader.plaintext.PlaintextCorpusReader(".", "d.txt", para_block_reader=read_line_block)
print "Paragraphs =", len(corpusReader.paras())
The plaintext corpus reader can only read plain-text files. There are Python libraries that can read .docx, but that will not address your problem, which is that Word delimits paragraphs by a single line break, whereas plaintext documents traditionally take a paragraph boundary to be a blank line, i.e., two successive newlines. In other words, your export method does preserve the newlines; there just aren't enough of them.
So there is an easy way to fix up your texts so that paragraphs are recognized without extra to-do: Once you've written out your plaintext file (which you can do from Word's Save As... menu or by cutting and pasting), post-process it like this (add encoding= arguments as necessary):
import re

with open("my_plaintext.txt") as oldfile:
    content = oldfile.read()

# Double every newline so paragraphs end up separated by blank lines.
content = re.sub("\n", "\n\n", content)

with open("my_plaintext_fixed.txt", "w") as newfile:
    newfile.write(content)
You can now read my_plaintext_fixed.txt with the PlaintextCorpusReader, and everything will work as expected.
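As a quick check (a minimal sketch, assuming the fixed file sits in the current directory):

import nltk

# With blank-line boundaries in place, paras() should now return
# the expected paragraph count.
reader = nltk.corpus.reader.plaintext.PlaintextCorpusReader(".", "my_plaintext_fixed.txt")
print("Paragraphs =", len(reader.paras()))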
The source code for PlaintextCorpusReader is the first class defined on this page; it is fairly simple.
It has sub-components; if you don't specify them in the constructor, it uses the NLTK defaults:
para_block_reader (default: read_blankline_block), which says how the document is broken up into paragraphs.
sentence_tokenizer (default: the English Punkt tokenizer), which says how to break a paragraph into sentences.
word_tokenizer (default: WordPunctTokenizer()), which says how to break a sentence into tokens (words and symbols).
Note that the defaults may change across NLTK versions; I believe the default word_tokenizer used to be the Penn tokenizer. A sketch with all three components spelled out follows.
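For illustration, here is a minimal sketch that passes all three components explicitly, using what I believe are the usual defaults (verify against your NLTK version):

import nltk
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.corpus.reader.util import read_blankline_block
from nltk.tokenize import WordPunctTokenizer

# Equivalent to the defaults; swap any component to change behaviour.
reader = PlaintextCorpusReader(
    ".", "d.txt",
    word_tokenizer=WordPunctTokenizer(),
    sent_tokenizer=nltk.data.LazyLoader("tokenizers/punkt/english.pickle"),
    para_block_reader=read_blankline_block,
)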
Re 1.
No, PlaintextCorpusReader cannot read .docx; it only reads plain text.
I'm sure you can find a Python library to convert it.
Re 2.
Copy and paste is off-topic for this site; try Super User.
I suggest, though, that you instead use option 1 and get a library to do the conversion.
Re 3.
Yes, you can do a search and replace using a regex.
import re

def breakup(mystring):
    # Insert a line break after ., !, or ? (including ...) when it is
    # followed by four blank spaces.
    return re.sub(r"(\.\.\.|[.!?]) {4}", r"\1\n", mystring)
But perhaps you might instead want to swap out your para_block_reader or sent_tokenizer; a sketch of that approach follows.
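For example, a minimal, hypothetical para_block_reader that treats "punctuation followed by four spaces" as a paragraph boundary might look like this (it reads the whole file in one call, which is fine for small files):

import re
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

def read_punct_block(stream):
    # A block reader takes a stream and returns a list of paragraph
    # strings, or [] at end of file.
    text = stream.read()
    if not text:
        return []
    # Split wherever ., !, or ? is followed by four spaces.
    return re.split(r"(?<=[.!?]) {4}", text)

corpusReader = PlaintextCorpusReader(".", "d.txt", para_block_reader=read_punct_block)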
I am generating a CSV file from a Microsoft SQL database that was provided to me, but somehow there are invalid characters in about two dozen places throughout the text (there are many thousands of lines of data). When I open the CSV in my text editor, they display as red, upside-down question marks.
When I copy the character and view the "find/replace" dialog in my text editor, I see this:
\x{0D}
...but I have no idea what that means. I need to modify my script that generates the CSV so it strips these characters out, but I don't know how to identify them. My script is written in Classic ASP.
You can also use RegEx to remove unwanted characters:
Set objRegEx = CreateObject("VBScript.RegExp")
objRegEx.Global = True
objRegEx.Pattern = "[^A-Za-z]"
strCSV = objRegEx.Replace(strCSV, "")
This code is from the following article, which explains in detail what it does:
How Can I Remove All the Non-Alphabetic Characters in a String?
In your case you will want to add some characters to the Pattern. Note that, to remove everything else, it must stay a negated, unanchored character class:
[^a-zA-Z0-9!##$&()\\-`.+,/\"]
You can simply use the Replace function and specify Chr(191) (or "¿" directly):
Replace(yourCSV, Chr(191), "")
or
Replace(yourCSV, "¿", "")
This will remove the character. If you need to replace it with something else, change the last parameter from "" to a different value ("-" for example).
In general, you can use charmap.exe (Character Map) from the Run menu, select Arial, find a symbol, and copy it to the clipboard. You can then check its value using Asc("¿"); this returns the character code to use with Chr().
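If you just need to identify a mystery character's code, the same check is a one-liner in most languages; for illustration, here it is in Python (note that the \x{0D} shown in the find/replace dialog is hexadecimal for code point 13, a carriage return):

# Code points of the characters in question.
print(ord("\r"))  # -> 13, i.e. \x{0D}, a carriage return
print(ord("¿"))   # -> 191, the inverted question mark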
I am trying to write documentation with asciidoctor-pdf and I need to use characters like ă, â, î, ş, ţ. The PDF output is rendered, but the mentioned characters are rendered empty. I am not sure how to handle the issue.
For example, I wrote this code:
= Document Title
Doc Writer <doc@example.com>
:doctype: book
:source-highlighter: coderay
:listing-caption: Listing
// Uncomment next line to set page size (default is Letter)
//:pdf-page-size: A4
A simple http://asciidoc.org[AsciiDoc] document.
== Introducţie
A paragraph followed by a simple list with square bullets.
And the result was the word Introducţie rendered as "Introduc ie", and finally this error:
/usr/local/rvm/gems/ruby-2.2.2/gems/pdf-core-0.2.5/lib/pdf/core/pdf_object.rb:55: warning: regexp match /.../n against to UTF-8 string
Could this be a system encoding configuration problem?
Do I need to set a different encoding configuration in Ruby?
Thank you.
I think that if you want to be sure, you can always use the decimal character reference form. For the Latin small letter t with cedilla it is: &#355;
Check this table for the complete list:
List of Unicode characters
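If you need the decimal reference for some other character, it is easy to compute; for illustration, a Python one-liner (the variable name is just a placeholder):

# Compute the decimal character reference for any character.
ch = "ţ"                 # Latin small letter t with cedilla
print(f"&#{ord(ch)};")   # -> &#355;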
In addition, if you want to use this special character in a title, there was an issue with it:
Section id with characters outside of Windows-1252 encoding causes warning
It seems to be fixed now, but I did not verify it.
One possible way to write such special characters in titles is to declare an attribute in the preamble of your AsciiDoc document, for example:
:t-cedil: ţ
and to reference it in the main text:
== pass:normal[Test-{t-cedil}]
So your title will look like
Test-ţ
I am looking for a way to check whether a PDF is missing its end-of-file marker. So far I have found that I can use the pdf-reader gem and catch the MalformedPDFError exception, or of course I could simply open the whole file and check whether it ends with the %%EOF marker. I need to process lots of potentially large PDFs, and I want to load as little into memory as possible.
Note: all the files I want to detect will be lacking the EOF marker, so I feel this is a slightly more specific scenario than detecting general PDF "corruption". What is the best, fast way to do this?
TL;DR
Looking for %%EOF, with or without related structures, is relatively speedy even if you scan the entirety of a reasonably-sized PDF file. However, you can gain a speed boost if you restrict your search to the last kilobyte, or the last 6 or 7 bytes if you simply want to validate that %%EOF\n is the only thing on the last line of a PDF file.
Note that only a full parse of the PDF file can tell you if the file is corrupted, and only a full parse of the File Trailer can fully validate the trailer's conformance to standards. However, I provide two approximations below that are reasonably accurate and relatively fast in the general case.
Check Last Kilobyte for File Trailer
This option is fairly fast, since it only looks at the tail of the file, and uses a string comparison rather than a regular expression match. According to Adobe:
Acrobat viewers require only that the %%EOF marker appear somewhere within the last 1024 bytes of the file.
Therefore, the following will work by looking for the file trailer instruction within that range:
def valid_file_trailer?(filename)
  # Look for %%EOF anywhere within the last 1024 bytes.
  File.open(filename) { |f| f.seek(-1024, IO::SEEK_END); f.read.include?('%%EOF') }
end
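The TL;DR's even cheaper variant, checking just the last few bytes for the marker itself, can be sketched as follows (shown in Python purely for illustration; the helper name is a placeholder, and the same seek-and-compare works in Ruby):

import os

def ends_with_eof_marker(filename):
    # Read only the trailing bytes and require that the file ends
    # with %%EOF, allowing a trailing newline.
    with open(filename, "rb") as f:
        f.seek(-7, os.SEEK_END)  # raises OSError for files under 7 bytes
        return f.read().rstrip(b"\r\n").endswith(b"%%EOF")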
A Stricter Check of the File Trailer via Regex
However, the ISO standard is both more complex and a lot more strict. It says, in part:
The last line of the file shall contain only the end-of-file marker, %%EOF. The two preceding lines shall contain, one per line and in order, the keyword startxref and the byte offset in the decoded stream from the beginning of the file to the beginning of the xref keyword in the last cross-reference section. The startxref line shall be preceded by the trailer dictionary, consisting of the keyword trailer followed by a series of key-value pairs enclosed in double angle brackets (<< … >>) (using LESS-THAN SIGNs (3Ch) and GREATER-THAN SIGNs (3Eh)).
Without actually parsing the PDF, you won't be able to validate this with perfect accuracy using regular expressions, but you can get close. For example:
def valid_file_trailer?(filename)
  # The last line must be exactly %%EOF, preceded by the startxref
  # keyword and its byte offset, each on a line of its own.
  pattern = /^startxref\n\d+\n%%EOF\n\z/
  File.open(filename) { |f| !!(f.read.scrub =~ pattern) }
end
I have a huge .txt file made using Python. When I try to sort it using Notepad++/TextFX it returns the error: "This tool is not compatible with binary text. Please select text without [NUL] characters." Does this mean that I have non-printable characters in the file? Is it possible to convert this file to a compatible format so I can sort it using TextFX?
EDIT: I used mode 'a' in Python to write this file.
Thank you for your advice.
Using TextFX in Notepad++ you could try the following:
Mark the suspicious part, or the whole text.
Select TextFX, TextFX Characters, Zap all nonprintable characters to #. (The last entry in that submenu.)
All the problematic characters should now have been replaced with "#"; you can then search for "#".
Another idea is the function Search, "Find characters in range". Check "My range:" and enter "0" and "0" as the range to find [NUL] characters.
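Since the file was written from Python in the first place, you could also strip the [NUL] bytes there before sorting (a minimal sketch; the file names are placeholders):

# Remove NUL bytes so TextFX will accept the file for sorting.
with open("huge.txt", "rb") as f:
    data = f.read()

with open("huge_clean.txt", "wb") as f:
    f.write(data.replace(b"\x00", b""))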
Lars
How would I replace the first line of a text file or XML file using Ruby? I'm having problems replicating a strange XML API and need to edit the document instruction after I create the XML file. It is strange that I have to do this, but in this case it is necessary.
If you are editing XML, use a tool specially designed for the task. sub, gsub and regex are not good choices if the XML being manipulated is not under your control.
Use Nokogiri to parse the XML, locate nodes and change them, then emit the updated XML.
There are many examples on SO showing how to do this, plus the tutorials on the Nokogiri site.
There are a couple of different ways you can do this:
Use ARGF (assuming that your Ruby program takes a file name as a command-line parameter):
ruby -e "puts ARGF.to_a[n]" yourfile.xml
Open the file regularly, then read n lines:
File.open("yourfile") { |f|
  line = nil
  n.times { line = f.gets }
  puts line
}
This approach is less memory-intensive, as only a single line is held at a time; it is also the simplest method.
Use IO.readlines() (this will only work if the entire file fits in memory!):
IO.readlines("yourfile")[n]
IO.readlines(...) will read every line from your file into an array.
Where n in all the above examples is the index of the line you want from your file.
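And since the question is about replacing the first line, here is a minimal read-then-rewrite sketch (shown in Python for illustration; the function name and file name are placeholders, the Ruby version follows the same pattern, and it assumes the file fits in memory):

def replace_first_line(path, new_first_line):
    # Read all lines, swap out line 0, and write the file back.
    with open(path) as f:
        lines = f.readlines()
    lines[0] = new_first_line + "\n"
    with open(path, "w") as f:
        f.writelines(lines)

replace_first_line("yourfile.xml", '<?xml version="1.0" encoding="UTF-8"?>')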