I have a few MP3 files as binary strings, all with the same number of channels and the same sample rate. I need to concatenate them in memory without using command-line tools.
Currently I just do string concatenation, like this:
out = ''
mp3s.each { |mp3| out << mp3 }
Audio players can play the result, but with some warnings, because, as far as I understand, the MP3 headers are not handled correctly.
Is there a more correct way to perform the concatenation?
After reading this article about MP3 (in Russian), I came up with a solution.
You should be able to get the complete ID3 specification at http://id3.org/, but it seems to be down at the moment.
An MP3 file usually has the following layout:
[ ID3 header (10 bytes) | ID3 tags | MP3 frames ]
ID3 is not part of the MP3 format; it is a container used to store information such as artist, album, etc.
The audio data itself is stored in MP3 frames. Every frame starts with a 4-byte header that provides meta information (codec, bitrate, etc.).
Every frame has a fixed size, so if there are not enough samples at the end of the last frame, the encoder pads it with silence. I also found chunks like LAME3.97 (the name and version of the encoder) in there.
So all we need to do is get rid of the ID3 container. The following solution works perfectly for me: no more warnings, and the output file is smaller:
# Length of header that describes ID3 container
ID3_HEADER_SIZE = 10
# Get size of ID3 container.
# Length is stored in 4 bytes, and the most significant bit (bit 7) of every byte is ignored.
#
# Example:
# Hex: 00 00 07 76
# Bin: 00000000 00000000 00000111 01110110
# Real bin: 111 1110110
# Real dec: 1014
#
def get_id3_size(header)
  result = 0
  str = header[6..9]
  # Read 4 size bytes from left to right applying bit mask to exclude 7th bit
  # in every byte.
  4.times do |i|
    result += (str[i].ord & 0x7F) * (2 ** (7 * (3-i)))
  end
  result
end
def strip_mp3!(raw_mp3)
  # 10 bytes that describe ID3 container.
  id3_header = raw_mp3[0...ID3_HEADER_SIZE]
  id3_size = get_id3_size(id3_header)
  # Offset from which mp3 frames start
  offset = id3_size + ID3_HEADER_SIZE
  # Get rid of ID3 container
  raw_mp3.slice!(0...offset)
  raw_mp3
end
# Read raw mp3s
hi = File.binread('hi.mp3')
bye = File.binread('bye.mp3')
# Get rid of ID3 tags
strip_mp3!(hi)
strip_mp3!(bye)
# Concatenate mp3 frames
hi << bye
# Save result to disk
File.binwrite('out.mp3', hi)
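One caveat: strip_mp3! assumes every input really begins with an ID3v2 container (ID3v2 data always starts with the ASCII bytes "ID3"); a file without one would lose its first frames. A minimal guard, as a sketch (strip_mp3_safe! is just a hypothetical helper name, not part of the solution above):
def strip_mp3_safe!(raw_mp3)
  # Only strip when the buffer really starts with an ID3v2 container;
  # otherwise we would throw away the beginning of the audio data.
  return raw_mp3 unless raw_mp3[0, 3] == 'ID3'
  strip_mp3!(raw_mp3)
end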
My Tcl source files are in UTF-8. Tclhttpd would not send national characters properly, so I modified it a bit. However, I also send binary stuff like JPEG images, and sometimes binary chunks are present in my otherwise UTF-8 HTML. I have difficulty calculating the proper Content-Length to match exactly what the browser receives (otherwise some trailing characters clobber the next request's headers, or the browser keeps waiting 30 seconds per request until a timeout).
In other words, how can I know how many bytes puts $socket wrote into the socket?
I have discovered a particular 11-byte sequence that messes up counting:
proc dump3 string {
    binary scan $string c* c
    binary scan $string H* hex
    return [sdump $string]\n$c\n$hex
};#dump3

proc Httpd_ReturnData {sock type content {code 200} {close 0}} {
    global Httpd
    upvar #0 Httpd$sock data
    #...skip non-pertinent code...
    set content \x4f\x4e\xc2\x00\x03\xff\xff\x80\x00\x3c\x2f
    #content=ONÂÿÿ�</
    #79 78 -62 0 3 -1 -1 -128 0 60 47
    #4f4ec20003ffff80003c2f
    puts content=[dump3 $content]
    puts utf8=[dump3 [encoding convertto utf-8 $content]]
    if {[catch {
        puts "string length=[string length $content] type=$type"
        puts "stringblength=[string bytelength $content]"
        set len [string length $content]
        if [string match -nocase *utf-8* $type] {
            fconfigure $sock -encoding utf-8
            set len [string bytelength $content]
        }
        puts "len=$len fcon=[fconfigure $sock]"
        HttpdRespondHeader $sock $type $close $len $code
        HttpdSetCookie $sock
        puts $sock ""
        if {$data(proto) != "HEAD"} {
            ##fconfigure $sock -translation binary -blocking $Httpd(sockblock)
            ##native: -translation {auto crlf}
            fconfigure $sock -translation lf -blocking $Httpd(sockblock)
            puts -nonewline $sock $content
        }
        Httpd_SockClose $sock $close
    } err]} {
        HttpdCloseFinal $sock $err
    }
}
The output on the console is:
content=ONÂÿÿ�</
79 78 -62 0 3 -1 -1 -128 0 60 47
4f4ec20003ffff80003c2f
utf8=ON�ÿÿ�</
79 78 -61 -126 0 3 -61 -65 -61 -65 -62 -128 0 60 47
4f4ec3820003c3bfc3bfc280003c2f
string length=11 type=text/html;charset=utf-8
stringblength=17
len=17 fcon=-blocking 0 -buffering full -buffersize 16384 -encoding utf-8 -eofchar {{} {}} -translation {auto crlf} -peername {128.0.0.71 128.0.0.71 55305} -sockname {128.0.0.8 gen 8016}
HttpdRespondHeader 17
The resulting Content-Length: 17 is too large, and the browser keeps waiting. If only I could know beforehand how many bytes puts will make out of my string, the rest would be easy. Is there a way?
For data going over HTTP, the content length should be the number of bytes in the data as observed on the wire. When working with Httpd_ReturnData you need to ensure that you provide it the binary data to transfer; it does not handle encoding the data for you.
Sending binary data with a correct length is actually easy:
set binaryData [...]
Httpd_ReturnData $sock "application/octet-stream" $binaryData
# There are many other binary encodings; that's just the most universal one
# Choose the right one for your application, of course
To send text data with a length, you need to do a little more work with encoding convertto:
set textData [...]
Httpd_ReturnData $sock "text/plain; charset=utf-8" \
    [encoding convertto utf-8 $textData]
# Similarly, text/plain is a decent fallback here too
(Yes, if you choose a different encoding then you should mention that in both places. You probably ought to use UTF-8 for all text content in this day and age.)
If you can pull the data from a file, you should do so; Httpd_ReturnFile is more efficient than Httpd_ReturnData as it can move the data using efficient data transfer techniques. If sending a text file, you need to be careful to describe the encoding of the file correctly. By far the easiest way to do that is by convention, such as deciding that all text files on your system are UTF-8...
You should virtually never use string bytelength, as that reports in units that are one of Tcl's internal-only encodings (a lightly-denormalized almost-UTF-8). The measure it returns is only correct when you're doing something very weird like generating C code that needs to know buffer sizes that contain strings that will be fed into Tcl's implementation, which is very much not what you're doing (I've only done that sort of thing once in more than 20 years of using Tcl; I've never heard of another legitimate use). I believe it is deprecated precisely because it has a bunch of subtle bugs in how it is used by all too many people.
The 'mdat' box of an MP4 file may be at the end of the file. I want to know the position of the 'mdat' box using ffmpeg or ffprobe.
An MP4 file consists of 'ftyp', 'moov' and 'mdat' boxes. Each box consists of a BoxHeader and BoxData. The BoxHeader consists of BoxSize (4 bytes), BoxType (4 bytes) and BoxLargesize (8 bytes, present only when the box size exceeds what 4 bytes can express, in which case BoxSize is set to 1).
In a program, you could first read 8 bytes to get the size of the 'ftyp' box, then seek past it and read another 8 bytes to see whether the next box is the 'moov' box. If it is not 'moov', it should be another box; you seek across each box in this way until you find the 'mdat' box...
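For reference, that manual walk could be sketched like this in Ruby (just a rough sketch; walk_boxes is an illustrative name, and a well-formed file is assumed):
# Walk the top-level boxes of an MP4 file and print each one's offset and size.
def walk_boxes(path)
  file_size = File.size(path)
  File.open(path, 'rb') do |f|
    while f.pos < file_size
      offset = f.pos
      size, type = f.read(8).unpack('Na4')   # BoxSize (4-byte big-endian) + BoxType (4 chars)
      if size == 1                            # BoxLargesize follows the type field
        size = f.read(8).unpack('Q>').first   # 8-byte big-endian size
      elsif size == 0                         # box extends to the end of the file
        size = file_size - offset
      end
      puts "#{type} at offset #{offset}, size #{size} bytes"
      f.seek(offset + size)
    end
  end
end

walk_boxes('demo.mp4')   # the 'mdat' line gives the position asked about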
But I want to use ffprobe to find the position of 'moov'. I ran ffprobe -v trace demo.mp4, and the output is like below:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fc8fd000e00] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fc8fd000e00] type:'ftyp' parent:'root' sz: 28 8 41044500
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fc8fd000e00] ISO: File Type Major Brand: mp42
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fc8fd000e00] type:'moov' parent:'root' sz: 17943 36 41044500
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fc8fd000e00] type:'mvhd' parent:'moov' sz: 108 8 17935
I want to know the meaning of type:'ftyp' parent:'root' sz: 28 8 41044500.
type:'ftyp' parent:'root' is easy to understand, but sz: 28 8 41044500 really confuses me. I guess 28 is the size of the ftyp box, but what do 8 and 41044500 mean?
Could you explain the meaning of sz: 28 8 41044500, and where can I find the documentation?
Consider
type:'mvhd' parent:'moov' sz: 108 8 17935
type and parent represent the type of the current and parent box respectively.
There are three values for sz (size).
The first value, 108, represents the total size of the current box, including the header.
The second value, 8, represents the starting offset of the box data relative to the start of the box header. This is needed because the box size field can be 8 bytes and the box type can have a UUID, in which case the header may be up to 20 bytes long. This offset will be non-zero even if the box has no data, e.g. free.
The third value, 17935, is the data size of the parent box. Applying this to the line from the question, type:'ftyp' parent:'root' sz: 28 8 41044500: the ftyp box is 28 bytes in total, its data starts 8 bytes into the box, and 41044500 is the data size of its parent, the root, i.e. essentially the whole file.
I have a question: how can I restore a PDF file if all I have is its ASCII output?
Example:
%PDF-1.3
%���������
4 0 obj
<< /Length 5 0 R /Filter /FlateDecode >>
stream
x�ѽ
�0�ݧ8O�����[�AAqp� �jK|{S�"�f�2���[�
�(M#���#�FFIw�=*��?J4'�P�y^TP`�Q�
+�i�E�8ψ�g���º��(6�֭,���s0�T��ZL�~�e�.EA��`J�f��<��M�
[...]
0000120481 00000 n
0000122448 00000 n
trailer
<</Size 94 /Root 57 0 R /Prev 116103 /Info 1 0 R>>
startxref
122488
%%EOF
That's the beginning and the end of the output I have, and I need to restore it to a readable form. I tried a few things, but had no luck.
It is impossible; the information was lost.
You can't represent binary data as printable text using the ASCII encoding at a one-byte-to-one-char ratio.
There are many non-printable characters in the ASCII table that get suppressed when converting the PDF's binary contents, destroying the original data.
Quoted-Printable encoding and Base64 encoding are more suitable for such an application.
Check this out: Binary-to-text encoding
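To illustrate the difference between a lossy ASCII dump and a proper binary-to-text encoding, here is a small Ruby sketch (the byte values are made up):
require 'base64'

data = "%PDF-1.3\n\xE2\x00\xFF compressed stream bytes".b   # made-up binary content

# A binary-to-text encoding such as Base64 round-trips every byte:
encoded = Base64.strict_encode64(data)
Base64.strict_decode64(encoded) == data    #=> true

# Keeping only the printable ASCII characters, as in the dump above, does not:
ascii_only = data.delete("^ -~\n")         # drop everything outside printable ASCII
ascii_only == data                         #=> false -- the dropped bytes are gone for good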
I'm trying to print the first 5 lines from a set of large (>500 MB) CSV files into small header files in order to inspect the content more easily.
I'm using Ruby code to do this, but each line comes out padded with extra Chinese characters, like this:
week_num type ID location total_qty A_qty B_qty count㌀㐀ऀ猀漀爀琀愀戀氀攀ऀ㤀㜀ऀ䐀䔀开伀渀氀礀ऀ㔀㐀㜀㈀ ㌀ऀ㔀㐀㜀㈀ ㌀ऀ ऀ㤀㈀㔀㌀ഀ
44 small 14 A 907859 907859 0 550360㐀ऀ猀漀爀琀愀戀氀攀ऀ㐀㈀ऀ䐀䔀开伀渀氀礀ऀ㌀ ㈀㜀㐀ऀ㌀ ㈀
The first few lines of the input file are like so:
week_num type ID location total_qty A_qty B_qty count
34 small 197 A 547203 547203 0 91253
44 small 14 A 907859 907859 0 550360
41 small 421 A 302174 302174 0 18198
The strange characters appear to be Line 1 and Line 3 of the data.
Here's my Ruby code:
num_lines=ARGV[0]
fh = File.open(file_in,"r")
fw = File.open(file_out,"w")
until (line=fh.gets).nil? or num_lines==0
  fw.puts line if outflag
  num_lines = num_lines-1
end
Any idea what's going on and what I can do to simply stop at the line end character?
Looking at the input/output files in hex (a useful suggestion by @user1934428):
Input file: each character looks to be two bytes.
Output file: notice the NULL (00) between each single-byte character...
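The same check can be done from Ruby itself; a small sketch (file_in as in the script above):
# Print the first 16 bytes in hex: an "ff fe" BOM, or a 00 byte after each ASCII
# character, is a strong hint that the file is UTF-16LE.
bytes = File.open(file_in, 'rb') { |f| f.read(16) }
puts bytes.unpack('C*').map { |b| format('%02x', b) }.join(' ')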
Ruby version 1.9.1
The problem is an encoding mismatch, which happens because the encoding is not explicitly specified in the read and write parts of the code. Open the input CSV in binary mode ("rb") with UTF-16LE encoding, and write the output in the same format.
num_lines = ARGV[0].to_i   # convert to Integer so the counter comparison and arithmetic below work
# ****** Specifying the right encodings <<<< this is the key
fh = File.open(file_in, "rb:utf-16le")
fw = File.open(file_out, "wb:utf-16le")
until (line = fh.gets).nil? or num_lines == 0
  fw.puts line
  num_lines = num_lines - 1
end
Useful references:
Working with encodings in Ruby 1.9
CSV encodings
Determining the encoding of a CSV file
How can I know if a TIFF image is in the format CCITT T.6 (Group 4)?
You can use this (C#) code example.
It returns a value indicating the compression type:
1: no compression
2: CCITT Group 3
3: Facsimile-compatible CCITT Group 3
4: CCITT Group 4 (T.6)
5: LZW
public static int GetCompressionType(Image image)
{
    // TIFF tag 259 (0x0103) holds the compression scheme
    int compressionTagIndex = Array.IndexOf(image.PropertyIdList, 0x103);
    PropertyItem compressionTag = image.PropertyItems[compressionTagIndex];
    return BitConverter.ToInt16(compressionTag.Value, 0);
}
You can check these links:
The TIFF File Format
TIFF Tag Compression
TIFF File Format Summary
Tag 259 (hex 0x0103) stores the information about the compression method.
--- Compression
Tag = 259 (103)
Type = word
N = 1
Default = 1.
1 = No compression, but pack data into bytes as tightly as possible, with no
unused bits except at the end of a row. The bytes are stored as an array
of bytes, for BitsPerSample <= 8, word if BitsPerSample > 8 and <= 16, and
dword if BitsPerSample > 16 and <= 32. The byte ordering of data >8 bits
must be consistent with that specified in the TIFF file header (bytes 0
and 1). Rows are required to begin on byte boundaries.
2 = CCITT Group 3 1-Dimensional Modified Huffman run length encoding.
See ALGRTHMS.txt BitsPerSample must be 1, since this type of compression
is defined only for bilevel images (like FAX images...)
3 = Facsimile-compatible CCITT Group 3, exactly as specified in
"Standardization of Group 3 facsimile apparatus for document
transmission," Recommendation T.4, Volume VII, Fascicle VII.3,
Terminal Equipment and Protocols for Telematic Services, The
International Telegraph and Telephone Consultative Committee
(CCITT), Geneva, 1985, pages 16 through 31. Each strip must
begin on a byte boundary. (But recall that an image can be a
single strip.) Rows that are not the first row of a strip are
not required to begin on a byte boundary. The data is stored as
bytes, not words - byte-reversal is not allowed. See the
Group3Options field for Group 3 options such as 1D vs 2D coding.
4 = Facsimile-compatible CCITT Group 4, exactly as specified in
"Facsimile Coding Schemes and Coding Control Functions for Group
4 Facsimile Apparatus," Recommendation T.6, Volume VII, Fascicle
VII.3, Terminal Equipment and Protocols for Telematic Services,
The International Telegraph and Telephone Consultative Committee
(CCITT), Geneva, 1985, pages 40 through 48. Each strip must
begin on a byte boundary. Rows that are not the first row of a
strip are not required to begin on a byte boundary. The data is
stored as bytes, not words. See the Group4Options field for
Group 4 options.
5 = LZW Compression, for grayscale, mapped color, and full color images.
You can run identify -verbose from the ImageMagick suite on the image. Look for "Compression: Group4" in the output.
UPDATE:
So, I downloaded the libtiff library from the link I mentioned before, and from what I've seen, you can do the following (untested):
#include <stdint.h>
#include <tiffio.h>

int isTIFF_T6(const char* filename)
{
    TIFF* tif = TIFFOpen(filename, "r");
    if (!tif)
        return 0;
    /* Read the Compression tag (259) through the public libtiff API */
    uint16_t compression = 0;
    TIFFGetField(tif, TIFFTAG_COMPRESSION, &compression);
    TIFFClose(tif);
    return compression == COMPRESSION_CCITTFAX4;
}
PREVIOUS:
This page has a lot of information about this format and links to some code in C:
Here's an excerpt:
The following paper covers T.4, T.6
and JBIG:
"Review of standards for electronic
imaging for facsimile systems" in
Journal of Electronic Imaging, Vol. 1,
No. 1, pp. 5-21, January 1992.
Source code can be obtained as part of
a TIFF toolkit - TIFF image
compression techniques for binary
images include CCITT T.4 and T.6:
ftp://ftp.sgi.com/graphics/tiff/tiff-v3.4beta035-tar.gz
Contact: sam@engr.sgi.com
Read more: http://www.faqs.org/faqs/compression-faq/part1/section-16.html#ixzz0TYLGKnHI