What is the appropriate value for the ExtAudioFileWrite inNumberFrames parameter?

I'm working on a FLAC-to-ALAC transcoder and trying to use ExtAudioFile to write to ALAC. I am using the FLAC library's callback-based system to read in the FLAC file, which means that every frame in the FLAC file results in a function call. Within that call, I set up my buffers and call ExtAudioFileWrite as follows:
AudioBufferList *fillBufList;
fillBufList = malloc(sizeof *fillBufList + (frame->header.channels - 1) * sizeof fillBufList->mBuffers[0]);
fillBufList->mNumberBuffers = frame->header.channels;
for (int i = 0; i < fillBufList->mNumberBuffers; i++) {
    fillBufList->mBuffers[i].mNumberChannels = 1; // non-interleaved
    fillBufList->mBuffers[i].mDataByteSize = frame->header.blocksize * frame->header.bits_per_sample / 8;
    fillBufList->mBuffers[i].mData = (void *)(buffer[i]);
    NSLog(@"%i", fillBufList->mBuffers[i].mDataByteSize);
}
OSStatus err = ExtAudioFileWrite(outFile, 1, fillBufList);
Now, the number 1 in the final line is something of a magic number that I've chosen because I figured that one frame in the FLAC file would probably correspond to one frame in the corresponding ALAC file, but this seems not to be the case. Every call to ExtAudioFileWrite returns an error value of -50 (error in user parameter list). The obvious culprit is the value I'm providing for the frame parameter.
So I ask, then, what value should I be providing?
Or am I barking up the wrong tree?
(Side note: I suspected that, despite the param-related error value, the true problem could be the buffer setup, so I tried mallocing a zeroed-out dummy buffer just to see what would happen. Same error.)

For ExtAudioFileWrite, the number of frames is the number of sample frames you want to write, where one frame holds one sample for each channel. If you're working with 32-bit float interleaved data, that's mDataByteSize / (sizeof(Float32) * mNumberChannels). It shouldn't be 1 unless you only meant to write a single frame, and if you're writing a compressed format, it expects a certain number of samples per packet, I think. It's also possible that the -50 error is another issue entirely.
One thing to check is that ExtAudioFile expects only one buffer. So your fillBufList->mNumberBuffers should always equal 1, and if you need stereo, you need to interleave the audio data so that mBuffers[0].mNumberChannels equals 2.
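A minimal sketch of the interleaved case, assuming the ExtAudioFile client format has been set to interleaved 32-bit float (the helper name and the stereo channel count are my own illustration, not from the original code):

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical helper: writes interleaved Float32 stereo sample frames.
static OSStatus writeInterleavedFrames(ExtAudioFileRef outFile,
                                       Float32 *samples, UInt32 frameCount) {
    const UInt32 channels = 2;
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;                      // one interleaved buffer
    bufList.mBuffers[0].mNumberChannels = channels;
    bufList.mBuffers[0].mDataByteSize = frameCount * channels * (UInt32)sizeof(Float32);
    bufList.mBuffers[0].mData = samples;
    // inNumberFrames is the frame count: mDataByteSize / (sizeof(Float32) * channels)
    return ExtAudioFileWrite(outFile, frameCount, &bufList);
}

If you want to keep the non-interleaved layout from the question instead, the client format you set on the file would need kAudioFormatFlagIsNonInterleaved, and a mismatch between that format and the buffer list layout is a classic source of -50.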

Related

Reimplementing QSerialPort canReadLine() and readLine() methods

I am trying to receive custom framed raw bytes via QSerialPort using value 0 as delimiter in asynchronous mode (using signals instead of polling).
The inconvenience is that QSerialPort doesn't seem to have a method that reads serial data until a specified byte value is encountered, e.g. read_until(delimiter_value) in pySerial.
I was wondering if it's possible to reimplement QSerialPort's readLine() method in Python so that it reads until a 0 byte value is encountered instead of '\n'. Similarly, it would be handy to reimplement canReadLine() as well.
I know that it is possible to use the readAll() method and then parse the data for the delimiter value, but this approach likely means more code and a decrease in efficiency. I would like to have the lowest overhead possible when processing the frames (the serial baud rate and the number of incoming bytes are large). However, if you know a fast way to do it, I would like to take a look.
I ended up parsing the frames myself; it seems to work well enough.
Below is a method extracted from my script which receives and parses serial data asynchronously. self.serial_buffer is a QByteArray initialized inside a custom class's init method. You can also use a globally declared bytearray, but then you will have to check for your delimiter value in another way.
@pyqtSlot()
def receive(self):
    self.serial_buffer += self.serial.readAll()  # Read all data from serial buffer
    start_pos, del_pos = 0, 0
    while True:
        del_pos = self.serial_buffer.indexOf(b'\x00', start_pos)  # b'\x00' is delimiter byte
        if del_pos == -1:
            break  # del_pos is -1 if b'\x00' is not found
        frame = self.serial_buffer[start_pos:del_pos]  # Copy data up to the delimiter
        start_pos = del_pos + 1  # Continue the search after the old delimiter
        self.process_frame(frame)  # Process frame
    self.serial_buffer = self.serial_buffer[start_pos:]  # Keep only the unconsumed remainder

FFmpeg get video length when duration is equal to "AV_NOPTS_VALUE"

There are a couple of videos I have where the duration value in the AVStream is set to AV_NOPTS_VALUE. But players like VLC are able to get the length of those videos. Even the file properties dialog in Ubuntu can read it.
So when this happens, what should I do to get the file length? Either in number of frames or in seconds, doesn't really matter.
Thanks
P.S.: only with the API, not interested in calling FFmpeg in the command line.
So I continued my research and found a solution:
// Seek to the last key-frame.
avcodec_flush_buffers(stream._codecContext);
av_seek_frame(_context, stream._idx, stream.frameToPts(1 << 29), AVSEEK_FLAG_BACKWARD);

// Read up to the last frame, extending the max PTS for every valid
// PTS value found for the video stream.
int64_t maxPts = 0; // highest PTS seen so far
av_init_packet(&_avPacket);
while (av_read_frame(_context, &_avPacket) >= 0) {
    if (_avPacket.stream_index == stream._idx &&
        _avPacket.pts != int64_t(AV_NOPTS_VALUE) &&
        _avPacket.pts > maxPts)
        maxPts = _avPacket.pts;
    av_free_packet(&_avPacket);
}
I changed it a bit to fit my needs, but this is roughly what I used.
Ref: ffmpegReader.cpp, look for function getStreamFrames.
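If you need the length in seconds rather than in PTS units, the maximum PTS found above can be rescaled by the stream's time base; a small follow-on sketch, reusing _context, stream._idx, and maxPts from the snippet:

// Convert the highest PTS found into seconds using the stream's time base.
AVRational timeBase = _context->streams[stream._idx]->time_base;
double lengthSeconds = maxPts * av_q2d(timeBase);

Strictly speaking you'd also subtract the stream's start_time and add the last frame's duration for an exact figure, but this gets you close.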

Why should we use OutputStream.write(byte[] b, int off, int len) instead of OutputStream.write(byte[] b)?

Sorry, everybody. It's a Java beginner question, but I think it will be helpful for a lot of Java learners.
FileInputStream fis = new FileInputStream(file);
OutputStream os = socket.getOutputStream();
byte[] buffer = new byte[1024];
int len;
while ((len = fis.read(buffer)) != -1) {
    os.write(buffer, 0, len);
}
The code above is part of a FileSenderClient class which sends files from a client to a server using java.io and java.net.Socket.
My question is: in the above code, why should we use
os.write(buffer, 0, len)
instead of
os.write(buffer)
To ask it another way: what is the point of having a "len" parameter for the OutputStream.write() method?
It seems both codes are working fine.
while ((len = fis.read(buffer)) != -1) {
    os.write(buffer, 0, len);
}
Because you only want to write data that you actually read. Consider the case where the input consists of N buffers plus one byte. Without the len parameter you would write (N+1)*1024 bytes instead of N*1024+1 bytes. Consider also the case of reading from a socket, or indeed the general case of reading: the actual contract of InputStream.read() is that it transfers at least one byte, not that it fills the buffer. Often it can't, for one reason or another.
It seems both codes are working fine.
No they're not.
It actually does not work in the same way.
It is very likely you used a very small text file to test. But if you look carefully, you will find that there are a lot of extra spaces at the end of the file you received, and that the received file is larger than the one you sent.
The reason is that you created a byte array of size 1024, but you don't always have that much data to read() into it, so the tail end of the array is left filled with NUL bytes. When the whole array is written to the file, these NULs are written too, and they show up as spaces " " in Windows Notepad...
If you use an advanced text editor like Notepad++ or Sublime Text to view the received file, you will see these NUL characters.

Crash when casting the result of arc4random() to Int

I've written a simple Bag class. A Bag is filled with a fixed ratio of Temperature enums. It allows you to grab one at random and automatically refills itself when empty. It looks like this:
class Bag {
    var items = Temperature[]()
    init() {
        refill()
    }
    func grab() -> Temperature {
        if items.isEmpty {
            refill()
        }
        var i = Int(arc4random()) % items.count
        return items.removeAtIndex(i)
    }
    func refill() {
        items.append(.Normal)
        items.append(.Hot)
        items.append(.Hot)
        items.append(.Cold)
        items.append(.Cold)
    }
}
The Temperature enum looks like this:
enum Temperature: Int {
    case Normal, Hot, Cold
}
My GameScene:SKScene has a constant instance property bag:Bag. (I've tried with a variable as well.) When I need a new temperature I call bag.grab(), once in didMoveToView and when appropriate in touchesEnded.
Randomly this call crashes on the if items.isEmpty line in Bag.grab(). The error is EXC_BAD_INSTRUCTION. Checking the debugger shows items is size=1 and [0] = (AppName.Temperature) <invalid> (0x10).
Edit: It looks like I don't understand the debugger info. Even valid arrays show size=1 and unrelated values for [0], so no help there.
I can't get it to crash isolated in a Playground. It's probably something obvious but I'm stumped.
The arc4random function returns a UInt32. If you get a value higher than Int.max, the Int(...) cast will crash.
Using
Int(arc4random_uniform(UInt32(items.count)))
should be a better solution.
(Blame the strange crash messages in the Alpha version...)
I found that the best way to solve this is by using rand() instead of arc4random(). The code, in your case, could be:
var i = Int(rand()) % items.count
This method will generate a random Int value between the given minimum and maximum:
func randomInt(min: Int, max: Int) -> Int {
    return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
The crash that you were experiencing is due to the fact that Swift detected a type inconsistency at runtime.
Since Int != UInt32, you will first have to type-cast the input argument of arc4random_uniform before you can compute the random number.
Swift doesn't allow you to cast from one integer type to another if the result of the cast doesn't fit. E.g. the following code will work okay:
let x = 32
let y = UInt8(x)
Why? Because 32 is a possible value for an int of type UInt8. But the following code will fail:
let x = 332
let y = UInt8(x)
That's because you cannot assign 332 to an unsigned 8-bit integer type; it can only take values 0 to 255 and nothing else.
When you do casts in C, the int is simply truncated, which may be unexpected or undesired, as the programmer may not be aware that truncation may take place. So Swift handles things a bit differently here. It will allow such casts as long as no truncation takes place, but if there is truncation, you get a runtime exception. If you think truncation is okay, then you must do the truncation yourself to let Swift know that it is intended behavior; otherwise Swift must assume that it is accidental.
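For contrast, here is what the silent C-style truncation described above looks like (a standalone snippet of my own, not from the original post):

#include <cstdint>
#include <cstdio>

int main() {
    int x = 332;
    std::uint8_t y = static_cast<std::uint8_t>(x); // silently truncated: 332 mod 256
    std::printf("%d\n", y);                        // prints 76
    return 0;
}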
This is even documented (documentation of UnsignedInteger):
Convert from Swift's widest unsigned integer type,
trapping on overflow.
And what you see is the "overflow trapping", which is poorly done as, of course, one could have made that trap actually explain what's going on.
Assuming that items never has more than 2^32 elements (a bit more than 4 billion), the following code is safe:
var i = Int(arc4random() % UInt32(items.count))
If it can have more than 2^32 elements, you get another problem anyway as then you need a different random number function that produces random numbers beyond 2^32.
This crash is only possible on 32-bit systems. Int changes between 32-bits (Int32) and 64-bits (Int64) depending on the device architecture (see the docs).
UInt32's max is 2^32 − 1. Int64's max is 2^63 − 1, so Int64 can easily handle UInt32.max. However, Int32's max is 2^31 − 1, which means UInt32 can handle numbers greater than Int32 can, and trying to create an Int32 from a number greater than 2^31-1 will create an overflow.
I confirmed this by trying to compile the line Int(UInt32.max). On the simulators and newer devices, this compiles just fine. But I connected my old iPod Touch (32-bit device) and got this compiler error:
Integer overflows when converted from UInt32 to Int
Xcode won't even compile this line for 32-bit devices, and the same overflow is likely the crash that is happening at runtime. Many of the other answers in this post are good solutions, so I won't add or copy those. I just felt that this question was missing a detailed explanation of what was going on.
This will automatically create a random Int for you:
var i = random() % items.count
i is of Int type, so no conversion necessary!
You can use
Int(rand())
To prevent getting the same sequence of random numbers every time the app starts, you can seed the generator with srand():
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
let randomNumber: Int = Int(rand()) % items.count

Is it possible to send several different datatypes at once with boost::asio without casting?

At the moment I'm filling an std::vector with all of my data and then sending it with async_write. All of the packets I send have a 2-byte header which tells the receiver how much further to read (if any further at all). The code which generates this std::vector is:
std::vector<boost::asio::const_buffer> BasePacket::buffer()
{
    std::vector<boost::asio::const_buffer> buffers;
    buffers.push_back(boost::asio::buffer(headerBytes_)); // This is just a boost::array<uint8_t, 2>
    return buffers;
}

std::vector<boost::asio::const_buffer> UpdatePacket::buffer()
{
    printf("Making an update packet into a buffer.\n");
    std::vector<boost::asio::const_buffer> buffers = BasePacket::buffer();
    boost::array<uint16_t, 2> test = { 30, 40 };
    buffers.push_back(boost::asio::buffer(test));
    return buffers;
}
This is read by:
void readHeader(const boost::system::error_code& error, size_t bytesTransferred)
{
    if (error)
    {
        printf("Error reading header: %s\n", error.message().c_str());
        return;
    }
    // At this point 2 bytes have been read into boost::array<uint8_t, 2> header
    uint8_t primeByte = header.data()[0];
    uint8_t supByte = header.data()[1];
    switch (primeByte)
    {
    // Unrelated case removed
    case PACKETHEADER::UPDATE:
        // Read the first 4 bytes as two 16-bit numbers representing the
        // size of the update
        boost::array<uint16_t, 2> buf;
        printf("Attempting to read the first two uint16_t's.\n");
        boost::asio::read(mySocket, boost::asio::buffer(buf));
        printf("The update has size %d x %d\n", buf.data()[0], buf.data()[1]);
        break;
    }
    // Keep listening
    boost::asio::async_read(mySocket, boost::asio::buffer(header),
        boost::bind(readHeader, boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}
The code compiles; however, it doesn't return 30 x 40 as I would expect. Instead it returns 188 x 40. If I stretch the second array out, only the first byte is messed up. However, if I add a third array before sending (but still read the sent amount), the values of the second array all get messed up. I'm guessing that this could be related to how I'm reading it (in chunks into one buffer rather than in the way I'm writing it).
Ideally I'd like to avoid having to cast everything into bytes and read/write that way, since it's less clear and probably less portable, but I know that's an option. However, if there is a better way I'm fine rewriting what I have.
The first problem I see is a lifetime issue with the data you are sending. An asio::buffer simply wraps a data buffer that you continue to own.
The UpdatePacket::buffer() method creates a boost::array, wraps it, and pushes the wrapper onto the buffers std::vector. When the method exits, the boost::array goes out of scope, and the asio::buffer is left pointing at garbage.
There may be other issues, but this is a good start. Mind the lifetimes of your data buffers in Asio.
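As a sketch of one way around it (the member layout here is illustrative, not from the original code): store the payload in the packet object itself so the wrapped memory lives as long as the packet does, then keep the packet alive until the write completes.

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <cstdint>
#include <vector>

class UpdatePacket {
public:
    UpdatePacket() : headerBytes_{{0x01, 0x00}}, payload_{{30, 40}} {}

    // The returned buffers point into member storage, so they stay
    // valid for as long as this UpdatePacket object is alive.
    std::vector<boost::asio::const_buffer> buffer() const {
        std::vector<boost::asio::const_buffer> buffers;
        buffers.push_back(boost::asio::buffer(headerBytes_));
        buffers.push_back(boost::asio::buffer(payload_));
        return buffers;
    }

private:
    boost::array<std::uint8_t, 2> headerBytes_;  // outlives the write call
    boost::array<std::uint16_t, 2> payload_;     // no longer a dangling temporary
};

A common way to guarantee the "keep the packet alive" half is to hold the packet in a shared_ptr and bind that shared_ptr into the async_write completion handler, so the object cannot be destroyed while the write is still in flight.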
