I have a compressed file starting with the hex:
8F AF AC 84
Can anyone tell me what type of compression this is, or at least point me to a program that can decompress the file?
According to this link that would be a .pmd file.
An alternative extension would be .ppmd. It is a file compressed using Prediction by Partial Matching.
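If it helps, here's a minimal way to check the signature programmatically (a sketch in Python; the 8F AF AC 84 value is just the hex dump from the question, and the file name is a placeholder):

# Minimal sketch: check whether a file starts with the 8F AF AC 84
# signature from the question (reportedly PPMd-compressed data).
PPMD_MAGIC = bytes([0x8F, 0xAF, 0xAC, 0x84])

def looks_like_ppmd(path):
    with open(path, "rb") as f:
        return f.read(4) == PPMD_MAGIC

print(looks_like_ppmd("mystery.bin"))  # "mystery.bin" is a placeholder name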
Hope that helps, but I'm voting to close as this has nothing to do with programming.
The "raw bytes module" on random.org is disabled (http://www.random.org/bytes/), and I need to know the output format they used to have.
More specificly: I have a program using 1024 raw hex bytes from random.org and I need to implement this feature.
What was the format they used to have? Possible formats I can think of are:
0x1a 0x1b
#1a #1b
1a 1b
1a1b
.......
There are many possibilities... Can anyone help me?
You can use http://archive.org/web/ to retrieve an old web page that doesn't exist anymore.
For example: http://web.archive.org/web/*/http://www.random.org/bytes/ (sorry, there is a silly problem with the * in the URL even when I percent-encode it, so copy and paste it rather than clicking it)
Then pick a snapshot from a date that looks right.
If it's not OK, go back and pick another snapshot.
For example, I chose 2014/03/01 and voilà.
If you're lucky you can even use the form on the archived page: here's the result with the default parameters.
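If you'd rather do this programmatically, the Wayback Machine also has a simple availability API; here's a rough sketch (the endpoint and response layout are from my memory of the public API, so double-check them):

# Rough sketch: ask the Wayback Machine for an archived snapshot of the
# random.org bytes page near 2014-03-01. The endpoint and JSON layout
# are my recollection of the public availability API; verify before use.
import json
import urllib.request

API = ("http://archive.org/wayback/available"
       "?url=http://www.random.org/bytes/&timestamp=20140301")

with urllib.request.urlopen(API) as resp:
    data = json.load(resp)

closest = data.get("archived_snapshots", {}).get("closest", {})
print(closest.get("url"))        # URL of the closest archived copy
print(closest.get("timestamp"))  # when it was captured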
Prior versions of Apple's iWork suite used a very simple document format:
documents were Bundles of resources (folders, zipped or not)
the bundle contained an index.apxl[z] file describing the document structure in a proprietary but fairly easy to understand schema
iWork '13 has completely redone the format. Documents are still bundles, but what was in the index XML file is now encoded in a set of binary files with type suffix .iwa packed into Index.zip.
In Keynote, for example, there are the following iwa files:
AnnotationAuthorStorage.iwa
CalculationEngine.iwa
Document.iwa
DocumentStylesheet.iwa
MasterSlide-{n}.iwa
Metadata.iwa
Slide{m}.iwa
ThemeStylesheet.iwa
ViewState.iwa
Tables/DataList.iwa
for MasterSlides 1…n and Slides 1…m
The purpose of each of these is quite clear from their naming. The files even appear uncompressed, with essentially all content text directly visible as strings among the binary blobs (albeit with some RTF/NSAttributedString-style garbage mixed in with the readable ASCII characters).
I have posted the unpacked Index of a simple example Keynote document here: https://github.com/jrk/iwork-13-format.
However, the overall file format is non-obvious to me. Apple has a long history of using simple, platform-standard formats like plists for encoding most of their documents, but there is no clear type tag at the start of the files, and it is not obvious to me what these iwa files are.
Do these files ring any bells? Is there evidence they are in some reasonably comprehensible serialization format?
Rummaging through the Keynote app runtime and class dumps with F-Script, the only evidence I've found is for some use of Protocol Buffers in the serialization classes which seem to be used for iWork, e.g.: https://github.com/nst/iOS-Runtime-Headers/blob/master/PrivateFrameworks/iWorkImport.framework/TSPArchiverBase.h.
Quickly piping a few of the files through protoc --decode_raw with the first 0…16 bytes lopped off produced nothing obviously usable.
I've done some work reverse engineering the format and published my results here. I've written up a description of the format and provided a sample project as well.
Basically, the .iwa files are Protobuf streams compressed using Snappy.
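For anyone who wants to poke at the files directly, here's a rough sketch of the unpacking step in Python using the python-snappy package. The chunk framing (a 0x00 type byte, a 3-byte little-endian length, then a raw Snappy block with no stream header) is my reading of the format from reverse engineering, not an official spec:

# Rough sketch: decompress the Snappy chunks of an .iwa file.
# Assumed framing (reverse-engineering notes, not an official spec):
# each chunk is a 0x00 byte, a 3-byte little-endian length of the
# compressed payload, then a raw Snappy block (no stream identifier).
import snappy  # pip install python-snappy

def decompress_iwa(path):
    with open(path, "rb") as f:
        data = f.read()
    out = bytearray()
    pos = 0
    while pos < len(data):
        length = int.from_bytes(data[pos + 1:pos + 4], "little")
        out += snappy.uncompress(data[pos + 4:pos + 4 + length])
        pos += 4 + length
    return bytes(out)

# The decompressed stream is a series of length-delimited Protobuf
# messages, which can then be inspected with protoc --decode_raw.
with open("Slide1.bin", "wb") as f:
    f.write(decompress_iwa("Slide1.iwa"))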
Hope this helps!
Interesting project, I like it! Here is what I have found so far.
The first 4 bytes of each of the iwa files appear to be a length, with a tweak. So it looks like there will not be any 'magic' to verify file type.
Look at Slide1.iwa:
First 4 bytes are 00 79 02 00
File size is 637 bytes
drop the leading 00 and read the remaining bytes as little-endian (i.e. reverse them): 00 02 79
00 02 79 == 633
637 - 633 = 4, which is exactly the 4 header bytes that hold the size.
This checks out for the 4 files I looked at: Slide1.iwa, Slide2.iwa, Document.iwa, DocumentStylesheet.iwa
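Here's a quick sketch that repeats the same arithmetic for any .iwa file (skip the leading 00, read the next three bytes as little-endian, and compare against the file size minus the 4 header bytes):

# Sketch: confirm that the 3 bytes after the leading 0x00 store the
# file size minus the 4-byte header, as worked out above.
import os

def check_iwa_header(path):
    with open(path, "rb") as f:
        header = f.read(4)                            # e.g. 00 79 02 00
    declared = int.from_bytes(header[1:4], "little")  # 79 02 00 -> 633
    return declared == os.path.getsize(path) - 4      # 637 - 4 = 633

for name in ("Slide1.iwa", "Slide2.iwa", "Document.iwa", "DocumentStylesheet.iwa"):
    print(name, check_iwa_header(name))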
I have several large PDF reports (>500 pages) with grid lines and background shading overlay that I converted from postscript using GhostScript's ps2pdf in a batch process. The PDFs that get created look perfect in the Adobe Reader.
However, when I go to print the PDF from Adobe Reader I get about 4-5 ppm from our Dell laser printer, with long, 10+ second pauses between each page. The same report PDF generated from another proprietary process (not GhostScript) yields a fast 25+ ppm on the same printer.
The PDF file sizes on both are nearly the same at around 1.5 MB each, but when I print both versions of the PDF to file (i.e. postscript), the GhostScript generated PDF postscript output is about 5 times larger than that of the other (2.7 mil lines vs 675K) or 48 MB vs 9 MB. Looking at the GhostScript output, I see that the background pattern for the grid lines/shading (referenced by "/PatternType1" tag) is defined many thousands of times throughout the file, where it is only defined once in the other PDF output. I believe this constant re-defining of the background pattern is what is bogging down the printer.
Is there a switch/setting to force GhostScript to define a pattern/image only once? I've tried using the -r and -dPDFSETTINGS=/print switches with no relief.
Patterns (and indeed images) and many other constructs should only be emitted once; you don't need to do anything to make this happen.
Forms, however, do not get reused, and it's possible that this is the source of your actual problem. As Kurt Pfeifle says above, it's not possible to tell without seeing a file which causes the problem.
You could raise a bug report at http://bugs.ghostscript.com, which will give you the opportunity to attach a file. If you do this, please do NOT attach a 500+ page file; it would be appreciated if you could find the time to create a smaller file which shows the same kind of size inflation.
Without seeing the PostScript file I can't make any suggestions at all.
I've looked at the source PostScript now, and as suspected the problem is indeed the use of a form. This is a comparatively unusual area of PostScript, and it's even more unusual to see it actually being used properly.
Because it is so rarely used, we have had no impetus to implement preserving forms in the output PDF, and this is what results in the large PDF. The way the pattern is defined inside the form doesn't help either. You could try defining the pattern separately; at least that way pdfwrite might be able to detect the multiple pattern usage and only emit it once (the pattern contains an imagemask, so this may be worthwhile).
This construction:
GS C20 setpattern 384 151 32 1024 RF GR
GS C20 setpattern 384 1175 32 1024 RF GR
is inefficient: you keep re-instantiating the pattern, which is expensive. This:
GS C20 setpattern
384 151 32 1024 RF
384 1175 32 1024 RF
GR
is more efficient
In any event, there's nothing you can do with pdfwrite to really reduce this problem.
'[...] when I print both versions of the PDF to file (i.e. postscript), the GhostScript generated PDF postscript output is about 5 times larger than that of the other (2.7 mil lines vs 675K) or 48 MB vs 9 MB.'
Which version of Ghostscript do you use? (Try gs -v or gswin32c.exe -v or gswin64c.exe -v to find out.)
How exactly do you 'print to file' the PDFs? (Which OS platform, which application, which kind of settings?)
Also, ps2pdf may not be your best option for the batch process. It's a small shell/batch script anyway, which internally calls a Ghostscript command.
Using Ghostscript directly will give you much more control over the result (though its commandline 'usability' is rather inconvenient and awkward -- that's why tools like ps2pdf are so popular...).
Lastly, without direct access to one of your PS input samples for testing (as well as the PDF generated by the proprietary converter) it will not be easy to come up with good suggestions.
I'm creating a program that "hides" an encrypted file at the end of a JPEG. The problem is, when retrieving this encrypted file again, I need to be able to determine where the JPEG it was stored in ends. At first I thought this wouldn't be a problem, because I can just run through the file checking for 0xFF 0xD9, the bytes JPEG uses to end the image. However... I'm noticing that in quite a few JPEGs this byte combination is not unique, so my program thinks the image has ended partway through it.
I'm thinking there must be a set way of expressing that a JPEG has finished, otherwise me adding a load of bytes to the end of the file would obviously corrupt it... Is there a practical way to do this?
You should read the JFIF file format specifications
Well, there are always two places in the file that you can find with 100% reliability. The beginning and the end. So, when you add the hidden file, add another 4 bytes that stores the original length of the file and a special signature that's always distinct. When reading it back, first seek to the end - 8 and read that length and signature. Then just seek to that position.
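A minimal sketch of that trailer idea (the 4-byte signature and the exact layout are arbitrary choices for illustration, not an established format):

# Sketch of the trailer idea: append the hidden payload, then 8 trailer
# bytes holding the original JPEG length plus a signature. MAGIC and the
# layout are made up for this example.
import struct

MAGIC = b"HID1"  # hypothetical signature

def embed(jpeg_path, payload, out_path):
    jpeg = open(jpeg_path, "rb").read()
    trailer = struct.pack("<I", len(jpeg)) + MAGIC       # length + signature
    open(out_path, "wb").write(jpeg + payload + trailer)

def extract(path):
    data = open(path, "rb").read()
    length = struct.unpack("<I", data[-8:-4])[0]         # seek to end - 8
    if data[-4:] != MAGIC:
        raise ValueError("no hidden payload found")
    return data[length:-8]                               # the hidden bytes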
You should read my answer on this question "Detect Eof for JPG images".
You're likely running into the thumbnail in the header. When moving through the file you should find that most marker segments contain a length indicator (here's a reference for which do and which don't), so you can skip the bytes within those segments; the true EOI marker will not be inside them.
Within the actual JPEG entropy-coded data, any FF byte should be followed either by 00 (byte stuffing; the zero byte is then discarded) or by one of the restart markers D0-D7.
So, in theory, the only place you should encounter a false EOI reading is inside one of those header segments (such as an embedded thumbnail), which you can skip using their length fields.
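Here's a sketch of that scanning approach for baseline, single-scan JPEGs (it walks the length-prefixed marker segments, where embedded thumbnails with their own FF D9 live, and only then looks for the real end-of-image marker):

# Sketch (baseline, single-scan JPEGs): find the offset just past the
# real EOI by walking marker segments rather than searching blindly for
# FF D9. Thumbnails live inside length-prefixed APPn segments, so
# skipping those avoids their embedded FF D9 markers.
def jpeg_end_offset(data):
    assert data[:2] == b"\xFF\xD8", "missing SOI marker"
    pos = 2
    while data[pos:pos + 2] != b"\xFF\xDA":                   # until SOS
        seg_len = int.from_bytes(data[pos + 2:pos + 4], "big")
        pos += 2 + seg_len                                    # 2-byte marker + segment
    pos += 2 + int.from_bytes(data[pos + 2:pos + 4], "big")   # skip the SOS header too
    # Entropy-coded data: FF is only followed by 00 (stuffing) or D0-D7
    # (restart markers), so the next FF D9 is the real end of image.
    while data[pos:pos + 2] != b"\xFF\xD9":
        pos += 1
    return pos + 2

with open("image.jpg", "rb") as f:
    buf = f.read()
hidden = buf[jpeg_end_offset(buf):]                           # whatever was appended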
I have a vast quantity of data (>800 MB) that takes an age to load into MATLAB, mainly because it's split up into tiny files, each <20 kB. They are all in a proprietary format which I can read and load into MATLAB; it's just that it takes so long.
I am thinking of reading the data in and writing it out to some sort of binary file which should make it quicker for subsequent reads (of which there may be many, hence me needing a speed-up).
So, my question is, what would be the best format to write them to disk to make reading them back again as quick as possible?
I guess I have the option of writing using fwrite, or just saving the variables from matlab. I think I'd prefer the fwrite option so if needed, I could read them from another package/language...
Look in to the HDF5 data format, used by recent versions of MATLAB as the underlying format for .mat files. You can manually create your own HDF5 files using the hdf5write function, and this file can be accessed from any language that has HDF bindings (most common languages do, or at least offer a way to integrate C code that can call the HDF5 library).
If your data is numeric (and of the same datatype), you might find it hard to beat the performance of plain binary (fwrite).
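To illustrate the cross-language point, here's a sketch of reading such a file from Python with h5py (the file name and dataset name are made-up examples; a -v7.3 .mat file would work the same way since it's HDF5 underneath):

# Sketch: read a dataset from an HDF5 file written by MATLAB (hdf5write,
# or a -v7.3 .mat file). File and dataset names are placeholder examples.
import h5py

with h5py.File("myfile.h5", "r") as f:
    print(list(f.keys()))     # list the datasets stored in the file
    data = f["/data"][...]    # load the dataset named "/data" as a NumPy array

# Note: MATLAB is column-major, so arrays may come back transposed.
print(data.shape, data.dtype)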
Binary mat-files are the fastest. Just use
save myfile.mat <var_a> <var_b> ...
I achieved an amazing speed-up in loading when I used the '-v6' option to save the .mat files, like so:
save(matlabTrainingFile, 'Xtrain', 'ytrain', '-v6');
Here's the size of the matrices that I used in my test ...
Attr Name Size Bytes Class
==== ==== ==== ===== =====
g Xtest 1430x4000 45760000 double
g Xtrain 3411x4000 109152000 double
g Xval 1370x4000 43840000 double
g ytest 1430x1 11440 double
g ytrain 3411x1 27288 double
g yval 1370x1 10960 double
... and the performance improvements that we achieved:
Before the change:
time to load the training data: 78 SECONDS!!!
time to load validation data: 32
time to load the test data: 35
After the change:
time to load the training data: 0 SECONDS!!!
time to load validation data: 0
time to load the test data: 0
Apparently the reason this works so well is that the old version-6 format used less compression than the newer versions.
So your file sizes will be bigger, but they will load WAY faster.