Ideally I want to avoid adding more gems; I am currently using pdf-reader, combine-pdf and origami.
When any of these gems comes across a damaged PDF, it sometimes doesn't raise an exception but just hangs and does nothing.
So I would like some help with code to check whether a file is damaged or not.
I have noticed that some damaged PDF files are missing the header bytes (hex 25 50 44 46, i.e. %PDF), but I'm afraid checking for that is not a universal solution.
Besides, all those gems sometimes throw exceptions even when the PDF does work, but at least if I'm sure the PDF is valid, I'll know what to do.
I could start there. How do I read a file's bytes as hex with Ruby? Is that the only way to check a PDF?
I've hit this situation before when verifying GRUB bootloaders with Ruby. I found the easiest solution was to do a pre-check of the hex bytes that I know are supposed to exist. Something along the lines of this:
# hexdump -C prints the bytes in file order (the default format prints 16-bit
# words in host byte order, which would swap "%P" to "P%" on little-endian machines)
result = `hexdump -C pdf_file.pdf | head -n 1`
valid_pdf = result.split(" ")[1..4] == ["25", "50", "44", "46"]
Over time you can expand your check to look for other bad pdfs ahead of time.
One good practice to work around these hangs on bad PDFs is to use Ruby's Timeout module. That way you can exit cleanly instead of having to forcefully shut down your program.
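For example, a minimal sketch wrapping one of the gems you already use (pdf-reader here; the 10-second budget and the file name are just placeholders):
require 'timeout'
require 'pdf-reader'

begin
  Timeout.timeout(10) do
    reader = PDF::Reader.new('pdf_file.pdf')
    reader.page_count   # force the gem to actually parse the document
  end
  puts 'PDF parsed OK'
rescue Timeout::Error
  puts 'Parsing hung; treating the file as damaged'
rescue StandardError => e
  puts "Parsing failed: #{e.message}"
end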
result = IO.binread('file.pdf', 4).unpack("H*").first
valid_pdf = result == '25504446'
This compares the first four bytes of the file.
File.read(pdf_filepath, 4) == "%PDF"
I am a novice Go programmer trying to learn the language's features. I want to split a large CSV file into multiple files in Go, with each file containing the header. How do I do this? I have searched everywhere but couldn't find the right solution. Any help in this regard will be greatly appreciated.
Could you also suggest a good book for reference?
Thank you.
Depending on your shell fu this problem might be better suited for common shell utilities, but you specifically mentioned Go.
Let's think through the problem.
How big is this CSV file? Are we talking 100 lines or is it 5 GB?
If it's smallish I typically use this:
http://golang.org/pkg/io/ioutil/#ReadFile
However, this package also exists:
http://golang.org/pkg/encoding/csv/
Regardless - let's return to the abstraction of the problem. You have a header (which is the first line) and then the rest of the document.
So what we probably want to do (if ignoring csv for the moment) is to read in our file.
Then we want to split the file body by all the newlines in it.
You can use this to do so:
http://golang.org/pkg/strings/#Split
You didn't mention but do you know how many files you want to split by or would you rather split by the line count or byte count? What's the actual limitation here?
Generally it's not going to be file count but if we pretend it is we simply want to divide our line count by our expected file count to give lines/file.
Now we can take slices of the appropriate size and write the file back out via:
http://golang.org/pkg/io/ioutil/#WriteFile
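Putting those pieces together, here is a rough sketch of that read/split/write approach. It assumes the whole file fits in memory and that we split into a fixed number of output files; the input name, the fileCount of 4 and the part_N.csv naming are just placeholders:
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"strings"
)

func main() {
	const fileCount = 4 // how many output files we want

	data, err := ioutil.ReadFile("input.csv")
	if err != nil {
		log.Fatal(err)
	}

	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	if len(lines) < 2 {
		log.Fatal("nothing to split")
	}
	header, body := lines[0], lines[1:]

	perFile := (len(body) + fileCount - 1) / fileCount // lines per file, rounded up
	for i := 0; i < fileCount; i++ {
		start := i * perFile
		if start >= len(body) {
			break
		}
		end := start + perFile
		if end > len(body) {
			end = len(body)
		}
		// every chunk gets the header prepended
		chunk := header + "\n" + strings.Join(body[start:end], "\n") + "\n"
		name := fmt.Sprintf("part_%d.csv", i+1)
		if err := ioutil.WriteFile(name, []byte(chunk), 0644); err != nil {
			log.Fatal(err)
		}
	}
}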
A trick I use sometimes to help think through these things is to write down our mission statement.
"I want to split a large csv file into multiple files in go"
Then I start breaking that up into pieces but take the divide/conquer approach - don't try to solve the entire problem in one go - just break it up to where you can think about it.
Also - make gratuitous use of pseudo-code until you can comfortably write the real code itself. Sometimes it helps to just write a short comment inline with how you think the code should flow, then get it down to the smallest portion that you can code and work from there.
By the way - many of the golang.org packages have example links where you can literally run the example code in your browser and cut/paste it into your own local environment.
Also, I know I'll catch some haters with this - but as for books - imo - you are going to learn a lot faster just by trying to get things working rather than reading. Action trumps passivity always. Don't be afraid to fail.
Here is a package that might help. You can set the desired chunk size in bytes and the file will be split into the appropriate number of chunks.
I have come up with a method to determine encoding (or at least a guess at it) for a file that I pass in:
def encoding_type(file_path)
  File.read(file_path).encoding.name
end
The problem with this is that I have a file that is 15GB, so that means the entire file is being read into memory.
Is there any way to accomplish what I am doing in this method without needing to read the entire file into memory?
The file --mime command will return the MIME type and encoding of the file:
file --mime myfile
myfile: text/plain; charset=iso-8859-1
def detect_charset(file_path)
  # e.g. `file --mime foo.txt` => "foo.txt: text/plain; charset=utf-8"
  `file --mime #{file_path}`.strip.split('charset=').last
rescue => e
  Rails.logger.warn "Unable to determine charset of #{file_path}"
  Rails.logger.warn "Error: #{e.message}"
end
The method you suggest in your question will not do what you think. It will simply return the string tagged with the Encoding.default_internal encoding, possibly after transcoding it from Encoding.default_external. These are both usually UTF-8. The reported encoding is always going to be Encoding.default_internal after you run that code; it is not guessing or determining the encoding from the actual file.
If you have a file and you really don't know what encoding it is, you indeed will have to guess. There's no way to be 100% sure you've gotten it right as the author intended (and some files are corrupt and mixed encoding or not legal in any encoding).
There are libraries with heuristics meant to try and guess (they won't be right all the time).
Here's one, which I've never actually used myself, but it's the likeliest prospect I found in 10 minutes of googling: https://github.com/oleander/rchardet. There might be other Ruby gems for this. You could also use Ruby's system() to call a Linux command-line utility that tries to do this; someone above mentions the Linux file command.
If you don't want to load the entire file in to test it, you can certainly just load part of it. The chardet library will probably work more reliably the more data it gets, but, sure, just read the first X bytes of the file and then ask chardet to guess its encoding.
require 'chardet19'
first1000bytes = File.read(file, 1000)
cd = CharDet.detect(first1000bytes)
cd.encoding
cd.confidence
You can also always check to see if any string in ruby is valid for the encoding it's set at:
str.valid_encoding?
So you could simply go through a variety of encodings and see if it's valid:
orig_encoding = str.encoding
str.force_encoding("ISO-8859-1").valid_encoding?
str.force_encoding("UTF-8").valid_encoding?
str.force_encoding(orig_encoding) # put it back to what it was
But it's certainly possible for a file to be valid in more than one encoding, or to be valid in a given encoding but read as nonsense by humans in that encoding.
If you have your best-guess encoding, but it's still not valid_encoding? for that encoding, it may just have a few bad bytes in it. You can remove them with String#scrub in Ruby 2.1, or with a pure-Ruby backport of String#scrub in other Ruby versions.
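A quick sketch of that last step (the file name, the 1 MB sample size and the "?" replacement character are just for illustration):
str = File.read('mystery.txt', 1_000_000)           # sample roughly the first 1 MB
str.force_encoding('UTF-8')                         # tag it with our best-guess encoding
clean = str.valid_encoding? ? str : str.scrub('?')  # replace any invalid bytes with '?'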
Hope this helps give you some idea of what you're dealing with and what your options are.
I have over 120 .txt files (all named like s1.txt, s2.txt, ..., s120.txt) that I need to convert to the .ascii extension to use in MATLAB.
My .txt files (comma-delimited) look like the following:
20080102,43.0300,3,9.493,569.567,34174.027,34174027
20080102,43.0600,3,9.498,569.897,34193.801,34193801
In MATLAB I wish to use code similar to the following:
for i = svec;
    %# where svec = [1 2 13 15], some random numbers between 1 and 120.
    eval(['load %mydirectory', eval(['s',int2str(i)]),'.ascii']);
end;
If I am not mistaken, I can't use the above command with .txt files and therefore I must use ASCII files.
Since I have a lot of files to convert and they are large, is there a quick way to convert all my files via MATLAB, or is there good conversion software available for Mac? Would anyone have a better suggestion than using the code above?
Adding to nrz's answer:
I'm not sure what you want to do exactly, but know that you can open any file in MATLAB, both as text (ASCII) or in binary mode. The latter can be achieved using fread.
As a side note, you also asked for a better suggestion for your code.
Well, what did you try to achieve with the two eval invocations? Why not call the commands directly? Do this instead:
for i = svec
    load(['%mydirectory\s', int2str(i), '.txt'], '-ascii');
end
I also took the liberty to add a backslash that I think you had omitted.
In most cases, you'd be better off without using eval. Check the alternatives...
Can you show an example file? Not every text file is in a format that the load command accepts. If your file is not in a valid format, changing the extension part of the filename from .txt to .ascii doesn't help at all. Instead, in that case the data must either be converted to a format the load command accepts or, alternatively, be loaded into MATLAB by some other means, e.g. by using fscanf or xlsread. The file structure is needed for both ways to solve this.
See also load command in matlab loading blank file.
A slightly cleaner way:
for i = 1:120
    fname = fullfile('mydirectory', sprintf('s%d.txt', i));
    X = load(fname, '-ascii');
end
Possible Duplicate:
How to read a file from bottom to top in Ruby?
In the course of working on my Ruby program, I had the eureka moment that it would be much simpler to write if I were able to parse the text files backwards rather than forwards.
It seems like it would be simple to read the text file, line by line, into an array, write the lines backwards into a temp file, parse that temp file forwards (which would now effectively be going backwards), make any necessary changes, re-collect the resulting lines into an array, and write them backwards a second time, restoring the original direction, before saving the modifications as a new file.
While feasible in theory, I see several problems with it in practice, the biggest being that if the text file is very large, a single array will not be able to hold the entire document at once.
Is there a more elegant way to accomplish reading a text file backwards?
If you are not using lots of UTF-8 characters, you can use the Elif library, which works just like File. Just require Elif and replace File.open with Elif.open:
Elif.open('read.txt', "r").each_line { |s|
  puts s
}
This is a great library, but the only problem I am experiencing right now is that it has several problems with line endings in UTF-8. I now have to rethink a way to iterate over my files.
Additional Details
While googling for a way to read a file backwards with UTF-8, I found an approach that is already provided by the File class:
To read a file backwards you can try the following code:
File.readlines('manga_search.test.txt').reverse_each { |s|
  puts s
}
This can do a good job as well
There's no software limit on a Ruby Array. There are some memory limitations though: Array size too big - ruby
Your approach would work much faster if you can read everything into memory, operate there and write it back to disk. Assuming the file fits in memory of course.
Let's say your lines are 80 chars wide on average, and you want to read the last 100 lines. If you want it efficient (as opposed to implemented with the least amount of code), then go back 80*100 bytes from the end (using seek with the "relative to end" option), then read ONE line (it is likely a partial one, so throw it away). Remember your current position via tell, then read everything up to the end.
You now have either more or fewer than 100 lines in memory. If fewer, go back (100 + 1.5*no_of_missing_lines)*80 bytes, and repeat the above steps, but only read lines until you reach your remembered position from before. Rinse and repeat.
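A rough sketch of that approach, assuming roughly 80-byte lines (the helper name, the average line length and the 1.5 growth factor are just for illustration):
def last_lines(path, wanted, avg_line_len = 80)
  File.open(path) do |f|
    offset = wanted * avg_line_len
    loop do
      offset = [offset, f.size].min
      f.seek(-offset, IO::SEEK_END)
      f.readline unless offset == f.size   # discard the (probably partial) first line
      lines = f.readlines
      return lines.last(wanted) if lines.size >= wanted || offset == f.size
      offset += ((wanted - lines.size) * 1.5 * avg_line_len).to_i  # back up further and retry
    end
  end
end
Calling last_lines('big.log', 100) returns the trailing lines in file order; reverse the result if you want to process them newest-first.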
How about just going to the end of the file and iterating backwards over each char until you reach a newline, read the line, and so on? Not elegant, but certainly effective.
Example: https://gist.github.com/1117141
I can't think of an elegant way to do something so unusual as this, but you could probably do it using the file-tail library. It uses random access files in Ruby to read it backwards (and you could even do it yourself, look for random access at this link).
You could go through the file once forward, storing only the byte offset of each \n instead of the full string for each line. Then you traverse your offset array backwards and use IO#sysseek and IO#sysread to read lines out of the file. Unless your file is truly enormous, that should alleviate the memory issue.
Admittedly, this absolutely fails the elegance test.
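A rough sketch of that offset-index idea (the file name and the 4 KB maximum line length are assumptions):
offsets = [0]
File.open('big.txt', 'rb') do |f|
  offsets << f.pos while f.gets   # record where each following line starts
  offsets.pop                     # the last entry is just the EOF position
end

File.open('big.txt', 'rb') do |f|
  offsets.reverse_each do |off|
    f.sysseek(off, IO::SEEK_SET)
    chunk = f.sysread(4096)       # assumes no line is longer than 4 KB
    puts chunk.lines.first
  end
end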
I have a very large PDF File (200,000 KB or more) which contains a series of pages containing nothing but tables. I'd like to somehow parse this information using Ruby, and import the resultant data into a MySQL database.
Does anyone know of any methods for pulling this data out of the PDF? The data is formatted in the following manner:
Name | Address | Cash Reported | Year Reported | Holder Name
Sometimes the Name field overflows into the address field, in which case the remaining columns are displayed on the following line.
Due to the irregular format, I've been stuck on figuring this out. At the very least, could anyone point me to a Ruby PDF library for this task?
UPDATE: I accidentally provided incorrect information! The actual size of the file is 300 MB, or 300,000 KB. I made the change above to reflect this.
I assume you can copy'n'paste text snippets without problems when your PDF is opened in Acrobat Reader or some other PDF Viewer?
Before trying to parse and extract text from such monster files programmatically (even if it's 200 MByte only -- for simple text in tables that's huuuuge, unless you have 200000 pages...), I would proceed like this:
1. Try to sanitize the file first by re-distilling it.
2. Try with different CLI tools to extract the text into a .txt file.
This is a matter of minutes. Writing a Ruby program to do it is certainly a matter of hours, days or weeks (depending on your knowledge of the PDF file format internals... I suspect you don't have much experience with that yet).
If "2." works, you may halfway be done already. If it works, you also know that doing it programmatically with Ruby is a job that can in principle be solved. If "2." doesn't work, you know it may be extremely hard to achieve programmatically.
Sanitize the 'Monster.pdf':
I suggest to use Ghostscript. You can also use Adobe Acrobat Distiller if you have access to it.
gswin32c.exe ^
-o Monster-PDF-sanitized.pdf ^
-sDEVICE=pdfwrite ^
-f Monster.pdf
(I'm curious how much that single command will make your output PDF shrink if compared to the input.)
Extract text from PDF:
I suggest to first try pdftotext.exe (from the XPDF folks). There are other, a bit more inconvenient methods available too, but this might do the job already:
pdftotext.exe ^
-f 1 ^
-l 10 ^
-layout ^
-eol dos ^
-enc Latin1 ^
-nopgbrk ^
Monster-PDF-sanitized.pdf ^
first-10-pages-from-Monster-PDF-sanitized.txt
This will not extract all pages, only pages 1-10 (for proof of concept, to see if it works at all). To extract from every page, just leave off the -f 1 -l 10 parameters. You may need to tweak the encoding by changing the parameter to -enc ASCII7 (or UTF-8, UCS-2).
If this doesn't work the quick'n'easy way (because, as sometimes happens, some font in the original PDF uses a "custom encoding vector"), you should ask a new question describing the details of your findings so far. Then you will need to resort to bigger calibres to shoot down the problem.
"At the very least, could anyone point me to a Ruby PDF library for this task?"
If you haven't done so, you should check out the two previous questions: "Ruby: Reading PDF files," and "ruby pdf parsing gem/library." PDF::Reader, PDF::Toolkit, and Docsplit are some of the relatively popular suggested libraries. There is even a suggestion of using JRuby and some Java PDF library parser.
I'm not sure if any of these solutions is actually suitable for your problem, especially that you are dealing with such huge PDF files. So unless someone offers a more informative answer, perhaps you should select a library or two and take them for a test drive.
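As one possible starting point for such a test drive, here is a sketch using PDF::Reader that streams pages one at a time instead of loading the whole 300 MB of text at once; the file name and the column-splitting regex are placeholders, and as the next answer points out, real table extraction from a PDF usually needs per-document tuning:
require 'pdf-reader'

PDF::Reader.open('unclaimed_property.pdf') do |reader|
  reader.pages.each do |page|              # process one page at a time
    page.text.each_line do |line|
      columns = line.strip.split(/\s{2,}/) # naive split on runs of 2+ spaces
      next if columns.size < 2
      # ... insert `columns` into your MySQL table here ...
    end
  end
end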
This will be a difficult task, as rendered PDFs have no concept of tabular layout, just lines and text in predetermined locations. It may not be possible to determine what are rows and what are columns, but it may depend on the PDF itself.
The Java libraries are the most robust, and may do more than just extract text. So I would look into JRuby and iText or PDFBox.
Check whether there is any structured content in the PDF. I wrote a blog article explaining this at http://www.jpedal.org/PDFblog/?p=410
If not, you will need to build it.
Maybe the Prawn ruby library?