SuperCollider: load a mono Buffer from a single channel of a multichannel file

Is there any way to load a mono buffer by reading a single channel from a stereo or multichannel file?
Thanks!
[asked on behalf of someone else]

Yes: Buffer has a "readChannel" method that does exactly what you ask.
e.g. to load channel 3:
b = Buffer.readChannel(s, pathToAudioFile, channels: [3]);
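
For illustration, a minimal sketch (the file path is a placeholder, and note that channel indices are zero-based): read only the left channel of a stereo file into a mono Buffer, then audition it.

// read channel 0 (the left channel) of a stereo file into a mono Buffer
b = Buffer.readChannel(s, "~/sounds/stereo.wav".standardizePath, channels: [0], action: { |buf|
    ("loaded % channel(s), % frames").format(buf.numChannels, buf.numFrames).postln;
});
// once loading completes:
b.play;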

Related

Streaming HLS video from S3

I have HLS video chunks in an S3 bucket, and I need to stream them to the frontend. From the frontend side it is fairly easy: they just send a GET request to video/filename and I need to give the file back. The thing is that if I do it the standard way, my server will download the whole file, save it to a buffer, and only then send it to the frontend; that's not very good. Instead I want to "stream" it, so when, say, 1000 bytes arrive I send them immediately to the frontend, not waiting for the complete download.
The question is how can I do this? I thought that if I use io.Copy(responseWriter, response.Body), then it will actually send the response, but will it stream? Should I use a reverse proxy? Is there any solution using fasthttp?
io.Copy uses a 32 kB buffer internally. If this is too big for your streaming (or if you want to rate-limit it), just implement the loop that reads from upstream and writes downstream yourself. Peeking at the implementation of io.Copy is trivial, so what is your actual question?
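
To make that concrete, here is a minimal sketch using net/http (fasthttp would look similar); the bucket URL is hypothetical. The key point is that io.Copy/io.CopyBuffer already stream: each chunk read from the upstream body is written to the client immediately, so nothing beyond the copy buffer is held in memory.

package main

import (
    "io"
    "net/http"
)

// videoHandler proxies one HLS chunk from S3 to the client, streaming
// bytes as they arrive instead of buffering the whole file first.
func videoHandler(w http.ResponseWriter, r *http.Request) {
    // Hypothetical bucket URL; substitute your own.
    upstreamURL := "https://example-bucket.s3.amazonaws.com" + r.URL.Path

    resp, err := http.Get(upstreamURL)
    if err != nil {
        http.Error(w, "upstream fetch failed", http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()

    w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))

    // Read a chunk, write it to the client, repeat. Shrink the buffer
    // if you want smaller chunks (or to rate-limit) on the wire.
    buf := make([]byte, 32*1024)
    io.CopyBuffer(w, resp.Body, buf)
}

func main() {
    http.HandleFunc("/video/", videoHandler)
    http.ListenAndServe(":8080", nil)
}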

Resize MFT Issues: Video Composition in Windows Media Foundation

I'm trying to do composition with two separate video sources in Media Foundation. I am attempting to encode a video with a video overlay. To do so I am attempting to use the Video Resizer on the smaller input.
I've seen several threads on this, but I thought I'd ask around in any case.
Basically the idea is to create two source readers and a sink writer. The source files are h264, so I use the reader to decode into YUY2. While processing samples, I send the appropriate sample to the Resize MFT, then down the line (I haven't made it this far) I combine the two images to create the overlay effect with MFCopyImage.
My problem is that I am getting E_INVALIDARG when I call ProcessInput on the Resize MFT.
To initialize the MFT, I give it the appropriate type from the reader via SetInputType. After that I set all the appropriate properties via the property store, and then update the frame size on the output type of the MFT. I have read the documentation and modeled my implementation on the MFT Processing Model.
None of these steps raise any red flags until I actually attempt to use ProcessInput.
Although I have limited experience in Windows Media Foundation, I have been able to use the Framerate DSP with success. I would appreciate any advice.
Thank you!
For anyone else stuck in a similar situation: I ended up not using the Resizer MFT but the Video Processor MFT, which worked with much less effort.
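
For reference, a minimal sketch of standing up the Video Processor MFT as a YUY2 scaler, assuming COM and Media Foundation are already initialized (CoInitializeEx/MFStartup); the function name and frame-size parameters are illustrative, not from the original post.

#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>

HRESULT CreateYuy2Scaler(UINT32 inW, UINT32 inH, UINT32 outW, UINT32 outH,
                         IMFTransform** ppScaler)
{
    // The Video Processor MFT (Windows 8+) handles scaling and conversion.
    IMFTransform* pMFT = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_VideoProcessorMFT, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pMFT));
    if (FAILED(hr)) return hr;

    // Input type: YUY2 at the overlay's native size.
    IMFMediaType* pType = nullptr;
    hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_YUY2);
    if (SUCCEEDED(hr)) hr = MFSetAttributeSize(pType, MF_MT_FRAME_SIZE, inW, inH);
    if (SUCCEEDED(hr)) hr = pMFT->SetInputType(0, pType, 0);
    if (pType) { pType->Release(); pType = nullptr; }

    // Output type: same subtype, target size; the MFT does the resize.
    if (SUCCEEDED(hr)) hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_YUY2);
    if (SUCCEEDED(hr)) hr = MFSetAttributeSize(pType, MF_MT_FRAME_SIZE, outW, outH);
    if (SUCCEEDED(hr)) hr = pMFT->SetOutputType(0, pType, 0);
    if (pType) pType->Release();

    if (FAILED(hr)) { pMFT->Release(); return hr; }
    *ppScaler = pMFT;
    return S_OK;
}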

FFmpeg - Is it difficult to use?

I am trying to use FFmpeg and have been doing a lot of experimenting over the last month.
I have not been able to get through. Is it really difficult to use FFmpeg?
My requirement is simple, as below.
Can you please guide me on whether FFmpeg is the suitable choice, or whether I have to implement this on my own (using the available codec libraries)?
I have a WebM file (containing VP8 and OPUS frames).
I will read the encoded data and send it to the remote guy.
The remote guy will read the encoded data from the socket.
The remote guy will write it to a file (can we avoid decoding?).
Then the remote guy should be able to play the file using ffplay or any player.
Now I will take a specific example.
Say I have a file small.webm, containing VP8 and OPUS frames.
I am reading only audio frames (OPUS) using the av_read_frame API (then checking the stream index and filtering audio frames only).
So now I have the encoded data buffer as packet.data and the encoded data size as packet.size (please correct me if I'm wrong).
Here is my first doubt: the audio packet size is not the same every time. Why the difference? Sometimes the packet size is as low as 54 bytes and sometimes it is 420 bytes. For OPUS, will the frame size vary from time to time?
Next, say I somehow extract a single frame (I really do not know how to extract a single frame) from the packet and send it to the remote guy.
Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as an argument. I can create an AVPacket and set its data and size members, then call av_write_frame. But that does not work. The reason may be that one should set other members in the packet, like pts, dts, etc., but I do not have that information to set.
Can somebody help me work out whether FFmpeg is the right choice, or should I write custom logic, like parsing an OPUS file and getting it frame by frame?
Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as an argument. I can create an AVPacket and set its data and size members, then call av_write_frame. But that does not work. The reason may be that one should set other members in the packet, like pts, dts, etc., but I do not have that information to set.
Yes, you do. That information was in the original packet you received from the demuxer on the sender's side. You need to serialize everything in that packet and set each value accordingly on the receiver.
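
As a rough illustration in C: the wire format (wire_header) below is made up for this sketch, but it shows the idea that pts, dts, duration, stream_index, and flags travel with the payload and are restored before av_interleaved_write_frame on the receiving side. Timestamps may additionally need av_packet_rescale_ts if the output stream's time base differs from the input's.

#include <stdint.h>
#include <string.h>
#include <libavformat/avformat.h>

/* Made-up wire header: the packet metadata that must travel with the data. */
struct wire_header {
    int64_t pts;
    int64_t dts;
    int64_t duration;
    int32_t stream_index;
    int32_t flags;
    int32_t size;
};

/* Sender: flatten one demuxed packet into buf (bounds checks omitted). */
static size_t pack_packet(const AVPacket *pkt, uint8_t *buf)
{
    struct wire_header h;
    h.pts = pkt->pts;
    h.dts = pkt->dts;
    h.duration = pkt->duration;
    h.stream_index = pkt->stream_index;
    h.flags = pkt->flags;
    h.size = pkt->size;
    memcpy(buf, &h, sizeof h);
    memcpy(buf + sizeof h, pkt->data, pkt->size);
    return sizeof h + pkt->size;
}

/* Receiver: rebuild the packet and hand it to an already-opened muxer. */
static int unpack_and_write(AVFormatContext *oc, const uint8_t *buf)
{
    struct wire_header h;
    AVPacket *pkt;
    int ret;

    memcpy(&h, buf, sizeof h);
    pkt = av_packet_alloc();
    av_new_packet(pkt, h.size);
    memcpy(pkt->data, buf + sizeof h, h.size);
    pkt->pts = h.pts;
    pkt->dts = h.dts;
    pkt->duration = h.duration;
    pkt->stream_index = h.stream_index;
    pkt->flags = h.flags;

    ret = av_interleaved_write_frame(oc, pkt);
    av_packet_free(&pkt);
    return ret;
}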

FFmpeg libavformat internal buffering

I'm using FFmpeg in a C++ audio streaming and playback application.
I use the avformat_open_input function to open a URL to an external compressed audio file, and then I step through the stream using av_read_frame. For each packet I directly decode the data and queue it in the audio buffer using OpenAL.
My question is whether FFmpeg internally prebuffers compressed data from the external URL.
Does FFmpeg keep downloading data in the background even if I don't call av_read_frame?
Or is it my responsibility to maintain an intermediate buffer, where I download as many packets as possible ahead of time to avoid starving the audio playback?
If so, how much does it buffer/download internally? Can I configure this?
I have been looking through the documentation but have not found any information on this.
Thanks.
Update:
According to this thread (http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=376), libav should by default prebuffer about 5 MB, depending on AVFormatContext::max_analyze_duration. However, I haven't noticed this behavior, and it doesn't seem to change if I alter max_analyze_duration.
If I monitor the memory consumption of my process, it doesn't increase after I call avformat_open_input, and if I simulate a slow network, av_read_frame immediately stops working, as if it didn't have any packets buffered.
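
To illustrate the intermediate-buffer approach mentioned above, here is a minimal C++ sketch: a reader thread drains av_read_frame into a bounded queue ahead of playback, so the audio side never waits on the network. PacketQueue and kMaxQueued are illustrative names, not part of any FFmpeg API.

#include <deque>
#include <mutex>
#include <condition_variable>
extern "C" {
#include <libavformat/avformat.h>
}

// Thread-safe bounded queue of demuxed packets.
class PacketQueue {
    std::deque<AVPacket*> q_;
    std::mutex m_;
    std::condition_variable cv_;
    static constexpr size_t kMaxQueued = 256; // tune to your latency budget
public:
    void push(AVPacket* pkt) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return q_.size() < kMaxQueued; }); // backpressure
        q_.push_back(pkt);
        cv_.notify_all();
    }
    AVPacket* pop() { // called from the playback/decode side
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        AVPacket* pkt = q_.front();
        q_.pop_front();
        cv_.notify_all();
        return pkt;
    }
};

// Reader thread body: keeps the queue topped up regardless of how fast
// the audio side consumes packets.
void ReaderLoop(AVFormatContext* fmt, PacketQueue& queue) {
    for (;;) {
        AVPacket* pkt = av_packet_alloc();
        if (av_read_frame(fmt, pkt) < 0) { av_packet_free(&pkt); break; }
        queue.push(pkt); // blocks while the queue is full
    }
}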

How to write encoded H.264 to a byte array rather than a file

I'm using an MSDN tutorial to encode a raw RGB32 frame to an H.264 video; this first part works without any problem. (http://msdn.microsoft.com/en-us/library/ff819477%28v=VS.85%29.aspx)
But there is one thing that I can't do: I just want to write the output encoded video to a BYTE array rather than a file. I have read about 400 different web pages and all the Media Foundation documentation, but I don't see how to do that!
I have tried many different ways, like using MFCreateTempFile and working with the IMFByteStream, but got nowhere!
Then I tried this:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms698913%28v=VS.85%29.aspx
But my buffer is empty!
Please help me! I'm wearing my eyes out!
The H.264 Video Encoder is an MFT; that is, it exposes the IMFTransform interface and does not necessarily need to participate in a session. You can instantiate it standalone, set it up, and get raw H.264 encoded data from its ProcessOutput method.
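
A minimal sketch of that drain loop, assuming the encoder has already been created and its input/output types negotiated as in the MSDN tutorial. It also assumes the MFT provides its own output samples; check GetOutputStreamInfo for MFT_OUTPUT_STREAM_PROVIDES_SAMPLES, and if it is absent, allocate outBuf.pSample yourself with MFCreateSample/MFCreateMemoryBuffer.

#include <mfapi.h>
#include <mferror.h>
#include <mftransform.h>
#include <vector>

// Pull every pending encoded chunk out of the encoder and append the raw
// H.264 bytes to a std::vector<BYTE> instead of writing them to a file.
HRESULT DrainEncodedBytes(IMFTransform* pEncoder, std::vector<BYTE>& out)
{
    for (;;) {
        MFT_OUTPUT_DATA_BUFFER outBuf = {}; // stream 0, sample provided by MFT
        DWORD status = 0;

        HRESULT hr = pEncoder->ProcessOutput(0, 1, &outBuf, &status);
        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) return S_OK; // fully drained
        if (FAILED(hr)) return hr;

        if (outBuf.pEvents) outBuf.pEvents->Release(); // ignore MFT events here

        IMFMediaBuffer* pBuffer = nullptr;
        hr = outBuf.pSample->ConvertToContiguousBuffer(&pBuffer);
        if (SUCCEEDED(hr)) {
            BYTE* pData = nullptr;
            DWORD len = 0;
            hr = pBuffer->Lock(&pData, nullptr, &len);
            if (SUCCEEDED(hr)) {
                out.insert(out.end(), pData, pData + len); // raw H.264 bitstream
                pBuffer->Unlock();
            }
            pBuffer->Release();
        }
        outBuf.pSample->Release();
        if (FAILED(hr)) return hr;
    }
}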
