Is there a Ruby library that will allow me to either calculate the checksum of an MP3 file's audio data (minus the metadata) or allow me to read in an MP3's audio data to calculate the checksum myself?
I'm looking for something like this:
mp3 = Mp3Lib::MP3.new('/path/to/song.mp3')
mp3.audio.sha1sum # => the sha1 checksum of _only_ the audio, minus the metadata
I found Mp3Info, but it seems a bit tedious. When initializing an Mp3Info object, you can get the frames where the actual audio data begins and ends.
Extracting the MP3 file's audio data without its metadata is fairly easy to do yourself.
ID3v1
The metadata occupies the last 128 bytes of the file. If present, it always begins with the 3 bytes "TAG". Just ignore these last 128 bytes.
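For example, a minimal sketch (plain Ruby, not a library) that checks whether the trailing 128 bytes are an ID3v1 tag:

# Sketch: how many trailing bytes belong to an ID3v1 tag (0 or 128)?
def id3v1_size(path)
  return 0 if File.size(path) < 128
  tag = File.open(path, 'rb') do |f|
    f.seek(-128, IO::SEEK_END)
    f.read(3)
  end
  tag == 'TAG' ? 128 : 0
end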
ID3v2
The metadata can be stored at the beginning or the end of the file. Most implementations only support the beginning. ID3v2 has a header in which the size is stored. The header is always located at the beginning of the metadata. There is an optional footer, which is a copy of the header at the end of the metadata. If the metadata is at the end of the file, the footer is required.
The header has the following form
ID3v2/file identifier "ID3"
ID3v2 version $04 00
ID3v2 flags %abcd0000
ID3v2 size 4 * %0xxxxxxx
The footer has the following form
ID3v2/file identifier "3DI"
ID3v2 version $04 00
ID3v2 flags %abcd0000
ID3v2 size 4 * %0xxxxxxx
The d bit says whether the footer is present. The size is measured without the header and footer. Each of the four size bytes has its highest bit cleared (always zero), so only 28 of the 32 bits actually represent the size.
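In Ruby, decoding those four size bytes could look like this (a small sketch):

# Sketch: combine 4 synchsafe bytes (7 significant bits each) into one integer.
def synchsafe_to_int(bytes)
  bytes.inject(0) { |size, b| (size << 7) | (b & 0x7F) }
end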
Just compute which part of the file is not the metadata, and use that part for your hashing.
Be aware that if both ID3v1 and ID3v2 tags are located at the end of the file, the ID3v1 tag comes after the ID3v2 tag.
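Putting it together, a rough, untested sketch that uses the two helpers above and assumes the ID3v2 tag, if any, sits at the beginning of the file (a trailing ID3v2 tag is not handled):

require 'digest'

# Sketch: hash only the audio part of an MP3, skipping a leading ID3v2 tag
# and a trailing ID3v1 tag.
def audio_sha1(path)
  File.open(path, 'rb') do |f|
    data_start = 0
    header = f.read(10)
    if header && header[0, 3] == 'ID3'
      tag_size = synchsafe_to_int(header[6, 4].bytes)
      footer_present = (header.getbyte(5) & 0x10) != 0   # the d bit
      data_start = 10 + tag_size + (footer_present ? 10 : 0)
    end
    data_end = File.size(path) - id3v1_size(path)
    f.seek(data_start)
    Digest::SHA1.hexdigest(f.read(data_end - data_start))
  end
end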
The spec can be found at http://www.id3.org/id3v2.4.0-structure.
Isn't the ID3 tag stored either at the end of the file (ID3v1) in a 128-byte block, or in a block at the start of the file (ID3v2.3 and v2.4)? (id3.org)
You could use the audio_content method from Mp3Info, and read that much data from the file, though it's probably not much more complicated to have a look in the file yourself and work out where the headers aren't.
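A sketch of that approach, assuming Mp3Info's audio_content returns the offset and length of the audio data (check the gem's documentation for the exact return value):

require 'digest'
require 'mp3info'

path = '/path/to/song.mp3'
Mp3Info.open(path) do |mp3|
  offset, length = mp3.audio_content   # assumed to be [offset, length]
  audio = File.open(path, 'rb') do |f|
    f.seek(offset)
    f.read(length)
  end
  puts Digest::SHA1.hexdigest(audio)
end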
I have pretrained embeddings in word2vec text format. I loaded them and then saved them to .bin, but I cannot load the resulting file; it fails with EOFError: unexpected end of input; is count incorrect or file otherwise damaged?
My original code is:
model = KeyedVectors.load_word2vec_format(wordfile)
model.save_word2vec_format("file.bin",binary=True,write_header=True)
bin_model = KeyedVectors.load_word2vec_format("file.bin",binary=True)
And I can load this file.bin with a limit argument: KeyedVectors.load_word2vec_format("file.bin",binary=True, limit=10000).
Is there some other process needed when I save embeddings?
There's a good chance that your .bin file has an incorrect leading count, or the file has otherwise been damaged/truncated – that error means the file's header (1st line) declared a larger number of word-vectors than were found during the attempted load.
So, if you downloaded it or copied it from somewhere, check the original source, to make sure you've got the full file.
Is there a reason you're performing this conversion? The formats are essentially equivalent, and result in the exact same object in Python after loading.
If the binary format gives any tiny on-disk size savings, you could probably save more by gzipping the file (which .load_word2vec_format() will also happily decompress if it sees a trailing .gz on the filename).
I am a newbie to PDFClown and need help in parsing my PDF contents.
My PDF has a huge number of MarkedContent entries, which show up when the contents are displayed as a stream.
But I am not able to parse them into objects to extract the path information contained within, which is my objective.
Here is my code -
if (level.Contents[i] is MarkedContent)
{
    PdfDataObject ContentDataObj = level.Contents.BaseDataObject;
    PdfIndirectObject pdfIndirectObject = level.Contents.BaseDataObject.IndirectObject;
    PdfStream ContentStream = (PdfStream)ContentDataObj.Resolve();
    ContentParser contentParser = new ContentParser(ContentStream.GetBody(true).ToByteArray());
    IList<ContentObject> markerContentObjList = contentParser.ParseContentObjects();
    // Here I am getting only two ContentObjects, whereas the stream has many distinct marked contents
    for (int k = 0; k < markerContentObjList.Count; k++)
    {
    }
}
Below is the DOM Inspector screenshot and Stream data
In Short
There are multiple errors in the content streams of your PDF, in particular errors that close more objects than are opened. This is most likely what causes parsing to stop early. Even if it is not, PDF Clown would associate starts and ends of objects differently than intended. Thus, the only real fix is to ask the source of the documents to provide a non-broken version.
The First Content Stream
The screenshot you provided shows your first page content stream. The second content stream of that page exhibits the same issues as this one.
Non-Matching Starts and Ends of Marked Content Sequences
If we look at the marked content operators, we see
/OC /Heading BDC
...
EMC
EMC
/OC /Heading BDC
...
EMC
As you can see, there are two EMC operators for the first BDC. This is invalid. Confer ISO 32000-2 section 14.6 Marked content.
Invalid Fill Operator
Furthermore, there is a Fill operator directly following a text object:
BT
...
ET
f
This also is invalid; path painting operators are only allowed after a path object or a clipping path object, not after a text object. Confer ISO 32000-2 Figure 9 Graphics objects.
A Related PDF Clown Issue
Actually there is a bug in PDF Clown which makes processing of marked content with PDF Clown impossible anyway: PDF Clown assumes that marked content sections and save/restore graphics state blocks are properly contained in each other and don't overlap, see this answer for details. This assumption is wrong and results in incorrect graphic state contents as explained in that answer.
Thus, one should patch marked content support out of PDF Clown as explained there to at least have proper graphics state information. Thereafter, obviously, you cannot properly process marked content unless you add correct support for it yourself.
Why PDF Clown Stops at the End of the First Stream
As you observed, PDF Clown stops not after the extra EMC but instead at the end of the first content stream.
This is due to the PDF Clown issue explained above: Based on the assumption that marked content sections and save/restore graphics state blocks are properly contained in each other, PDF Clown simply makes EMC and Q close the most recently opened and still open marked content section or save/restore graphics state block without checking whether it matches alright.
Thus, it matches opening and closing operators in your stream like this:
[Start of page content]
. q
. . /OC /Heading BDC
. . EMC
. EMC
. /OC /Drawing BDC
. EMC
Q
So for PDF Clown that last Q does not match the initial q in the content but the start of page content itself.
I think that PDF Clown stops parsing here because it assumes it has found the end of page contents.
So I used the exiftool -all= command to remove the metadata from an image. However, when I print the metadata of the resulting image, I get this:
$ exiftool myimage.jpg
ExifTool Version Number : 11.30
File Name : myimage.jpg
Directory : out
File Size : 2.8 MB
File Modification Date/Time : 2019:05:16 03:34:02-07:00
File Access Date/Time : 2019:05:16 03:34:02-07:00
File Inode Change Date/Time : 2019:05:16 03:34:02-07:00
File Permissions : rw-r--r--
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
DCT Encode Version : 100
APP14 Flags 0 : [14]
APP14 Flags 1 : (none)
Color Transform : YCbCr
Image Width : 3729
Image Height : 2246
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:4:4 (1 1)
Image Size : 3729x2246
Megapixels : 8.4
I am wondering a few things:
If it is required at some level to have all of this (albeit minimal) metadata. That is, I'm wondering if we could get even more minimal and really remove all metadata.
If we can't remove all the metadata that remains, I'm wondering if I can at least remove the first 3 attributes (ExifTool Version Number, File Name, and Directory).
If any of this is possible, wondering what tool (preferably a command-line tool) could accomplish this.
Almost all that remaining data is not metadata embedded in the file. They are properties of the image or the underlying OS. Or even in the case of ExifTool Version Number, the version of exiftool you are running. The '-g' and '-G' options can show you which group each tag comes from; in particular, group families 0 and 1 can be used to see the tags that come from ExifTool and file attributes, as well as those embedded in the image file itself.
The items such as permissions, file name, directory, and the time stamps are taken directly from the underlying OS. Those are properties of every single file on the drive. Without them, the file itself does not exist.
The File Type/MIME Type entries are properties that exiftool itself determined when it figured out what kind of file it is.
Except for the APP14 entries, the rest are data about the image itself. How it is encoded, the format of the encoding blocks, the size of the image, etc.
The only thing that is embedded in this image is the APP14 block. This block contains no data that can identify the origin of the image, but there is a chance that removing it will significantly alter the colors of the image (see this post). It can be removed by adding -Adobe:All= to the command (for example, exiftool -Adobe:All= myimage.jpg).
This problem has bugged me for a while.
I have a jpeg file that is 34.6 kilobytes. Let's call it Image A. Using Ruby, when I copy each line of Image A to a newly created file, called Image B, it is copied exactly. It is exactly the same size as Image A and is accessible.
Here is the code I used:
image_a = File.open('image_a.jpg', 'r')
image_b = File.open('image_b.jpg', 'w+')
image_a.each_line do |l|
image_b.write(l)
end
image_a.close
image_b.close
This code generates a perfect copy of image_a into image_b.
When I try to copy Image A into Image B, byte by byte, it copies successfully but the file size is 88.9 kilobytes rather than the 34.6 kilobytes. I can't access Image B. My mac system alerted me it may be damaged or is using a file format that isn't recognized.
The related code:
# same as before
image_a.each_byte do |b|
  image_b.write(b)
end
# same as before
Why is Image B, when copied byte by byte, larger than Image A? Why is it also damaged in some way, shape, or form? Why is Image A the same size as B, and accessible, when copied line by line?
My guess is that the problem is an encoding issue. If so, why does the encoding format matter when copying byte by byte if the bytes translate into the correct code points? Are code points jumbled up into each other so that the parser is unable to differentiate between them?
Do \s and \n matter? It seems like it. I did some more research and I found that Image A had 128 lines of code whereas Image B had only one line.
Thanks for reading!
IO#each_byte iterates over bytes (aka Integers). IO#write, however, takes a string as an argument. So it converts the integer to a string via to_s.
Given the first byte in your image is 255 [1], you'd write the string "255" into image_b. This is why your image_b gets larger: you write number-strings into it.
Try the following when writing back bytes:
image_a.each_byte do |b|
  image_b.write(b.chr)   # convert the integer byte back into a one-character string
end
[1] As #stefan pointed out, JPEG images start with FF D8, so the first byte is 255.
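As a usage note, opening both files in binary mode avoids any newline or encoding translation on top of the fix above; a quick sketch reusing the file names from the question:

image_a = File.open('image_a.jpg', 'rb')   # binary read
image_b = File.open('image_b.jpg', 'wb')   # binary write

image_a.each_byte do |b|
  image_b.write(b.chr)                     # write each byte back as a 1-character string
end

image_a.close
image_b.close

puts File.size('image_a.jpg') == File.size('image_b.jpg')   # => true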
I'm using an opaque API in some Ruby code which takes a File/IO as a parameter. I want to be able to pass it an IO object that only gives access to a given range of data in the real IO object.
For example, I have an 8 GB file, and I want to give the API an IO object that exposes a 1 GB range within the middle of my real file.
real_file = File.new('my-big-file')
offset = 1 * 2**30 # start 1 GB into it
length = 1 * 2**30 # end 1 GB after start
filter = IOFilter.new(real_file, offset, length)
# The api only sees the 1GB of data in the middle
opaque_api(filter)
The filter_io project looks like it would be the easiest to adapt to do this, but doesn't seem to support this use case directly.
I think you would have to write it yourself, as it seems like a rather specific thing: you would have to implement all (or the subset you need) of IO's methods using a chunk of the opened file as the data source. An example of the "speciality" would be writing to such a stream: you would have to take care not to cross the boundary of the given segment, i.e. constantly keep track of your current position in the big file. It doesn't seem like a trivial job, and I don't see any shortcuts that could help you there.
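A rough, untested sketch of such a wrapper (read-only, implementing only a handful of IO's methods; the class name IOFilter is just taken from the question):

# Sketch: a read-only window over a section of another IO.
# A real wrapper would need to implement much more of IO's interface.
class IOFilter
  def initialize(io, offset, length)
    @io, @offset, @length = io, offset, length
    @pos = 0
  end

  # Mimics IO#read: read(nil) returns the rest, read(n) returns nil at EOF.
  def read(bytes = nil)
    remaining = @length - @pos
    return (bytes ? nil : '') if remaining <= 0
    bytes = remaining if bytes.nil? || bytes > remaining
    @io.seek(@offset + @pos)
    data = @io.read(bytes)
    @pos += data.bytesize if data
    data
  end

  def seek(new_pos)
    @pos = new_pos
    0
  end

  attr_reader :pos

  def eof?
    @pos >= @length
  end
end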
Perhaps you can find some OS-based solution, e.g. making a loopback device out of the part of the large file (see man losetup and particularly -o and --sizelimit options, for example).
Variant 2:
If you are ok with keeping the contents of the window in memory all the time, you may wrap StringIO like this (just a sketch, not tested):
require 'stringio'

def sliding_io(filename, offset, length)
  File.open(filename, 'r+') do |f|
    # read the window into a buffer
    f.seek(offset)
    buf = f.read(length)
    # wrap the buffer in a StringIO and pass it to the given block
    StringIO.open(buf) do |buf_io|
      yield(buf_io)
    end
    # write the (possibly altered) buffer back to the big file
    f.seek(offset)
    f.write(buf[0, length])
  end
end
And use it as you would use the block variant of IO.open.
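For example (reusing the names from the question):

offset = 1 * 2**30   # start 1 GB into the file
length = 1 * 2**30   # window is 1 GB long
sliding_io('my-big-file', offset, length) do |io|
  opaque_api(io)     # the API only sees the 1 GB window
end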
I believe the IO object has the functionality you are looking for. I've used it before for MD5 hash summing similarly sized files.
require 'digest'

incr_digest = Digest::MD5.new
File.open(filename, 'rb') do |io|
  while chunk = io.read(50000)   # read the file in 50 KB chunks
    incr_digest << chunk         # feed each chunk to the incremental digest
  end
end
This was the block I used, where I was passing the chunk to the MD5 Digest object.
http://www.ruby-doc.org/core/classes/IO.html#M000918