Upload non-seekable streams to MinIO using the dotnet client

I create streams on the fly and I want to upload them to MinIO storage.
However, PutObjectAsync expects the length of the stream, which is unknown because the stream is not seekable.
Is there any workaround for this?
Of course I could read the stream fully into a MemoryStream and use that instead, but because of the size of the streams (several gigabytes) that is not an option.

Related

Streaming HLS video from S3

I have HLS video chunks in an S3 bucket and I need to stream them to the frontend. From the front side it is fairly easy: they just send a GET request to video/filename and I need to give the file back. The thing is that if I do it the standard way, the whole file is downloaded to my server and saved in a buffer before it is sent to the front, and that's not very good. Instead I want to "stream" it, so that when, say, 1000 bytes arrive I send them immediately to the front, without waiting for the complete download.
The question is: how can I do this? I thought that if I use io.Copy(responseWriter, response.Body) it will actually send the response, but will it stream? Should I use a reverse proxy? Is there any solution using fasthttp?
io.Copy uses a 32 kB buffer internally. If that is too big for your streaming (or if you want to rate-limit it), just implement the for loop that reads from upstream and writes to downstream yourself. Peeking at the implementation of io.Copy is trivial, so what is your actual question?
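The loop the answer describes is only a few lines in any language. A rough C++ sketch of the same idea (the stream types and the 1 kB buffer size are purely illustrative; the original context is Go's io.Copy):

    #include <cstddef>
    #include <istream>
    #include <ostream>
    #include <vector>

    // Read a fixed-size chunk from upstream and push it downstream immediately,
    // so at most one buffer is ever held in memory. Shrink the buffer, or sleep
    // between iterations, to rate-limit the transfer.
    void stream_copy(std::istream& upstream, std::ostream& downstream,
                     std::size_t buf_size = 1024)
    {
        std::vector<char> buf(buf_size);
        while (upstream) {
            upstream.read(buf.data(), static_cast<std::streamsize>(buf.size()));
            std::streamsize n = upstream.gcount();   // bytes actually read this round
            if (n <= 0)
                break;
            downstream.write(buf.data(), n);         // forward them right away
            downstream.flush();                      // push to the client now
        }
    }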

Convert webm (or any other format) chunks to mp4

Is it possible to get webm (or other format) chunks from an HTTP POST (upload) on my server (I know how to do this), then feed those chunks (as received from the browser) to gstreamer or ffmpeg to be converted to mp4 with reduced quality, without loading the entire file into memory or writing it to disk before saving the converted mp4? Why don't I want it loaded fully into memory or onto disk? Scalability.
Yes, you can feed ffmpeg one frame at a time without keeping the whole video file locally. You can read chunks of data from the HTTP stream and give them to the ffmpeg library to decode. Here is an official example.
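The usual libavformat pattern for this is a custom AVIOContext with a read callback, so the demuxer pulls bytes from your upload handler instead of a file. A rough sketch (ChunkBuffer is a simplified stand-in: a real server would block in the callback on a bounded queue fed by the HTTP handler, so the whole file is never held in memory):

    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <algorithm>
    #include <cstring>
    #include <string>

    // Simplified stand-in for "the chunks received so far from the browser upload".
    struct ChunkBuffer {
        std::string data;      // bytes received from the HTTP POST
        size_t      pos = 0;   // how far the demuxer has read
    };

    // Read callback: libavformat calls this whenever the demuxer needs more bytes.
    static int read_packet(void* opaque, uint8_t* buf, int buf_size)
    {
        ChunkBuffer* cb = static_cast<ChunkBuffer*>(opaque);
        size_t avail = cb->data.size() - cb->pos;
        if (avail == 0)
            return AVERROR_EOF;                      // upload finished, no more data
        int n = static_cast<int>(std::min(avail, static_cast<size_t>(buf_size)));
        std::memcpy(buf, cb->data.data() + cb->pos, n);
        cb->pos += n;
        return n;
    }

    AVFormatContext* open_from_chunks(ChunkBuffer* chunks)
    {
        const int io_buf_size = 4096;
        uint8_t* io_buf = static_cast<uint8_t*>(av_malloc(io_buf_size));

        // Custom I/O context: no file name, all data comes from read_packet().
        AVIOContext* avio = avio_alloc_context(io_buf, io_buf_size, 0 /* read-only */,
                                               chunks, read_packet, nullptr, nullptr);

        AVFormatContext* fmt = avformat_alloc_context();
        fmt->pb = avio;
        if (avformat_open_input(&fmt, nullptr, nullptr, nullptr) < 0)
            return nullptr;                          // real code would free avio/io_buf
        // From here av_read_frame(fmt, ...) yields packets to transcode to mp4.
        return fmt;
    }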

FFmpeg - Is it difficult to use?

I am trying to use ffmpeg and have been experimenting with it for the last month.
I have not been able to get through. Is it really difficult to use FFmpeg?
My requirement is simple, as below.
Can you please guide me on whether ffmpeg is suitable, or whether I have to implement this on my own (using the available codec libs)?
I have a webm file (containing VP8 and OPUS frames)
I will read the encoded data and send it to the remote guy
The remote guy will read the encoded data from the socket
The remote guy will write it to a file (can we avoid decoding?)
Then the remote guy should be able to play the file using ffplay or any player.
Now I will take a specific example.
Say I have a file small.webm, containing VP8 and OPUS frames.
I am reading only the audio frames (OPUS) using the av_read_frame API (then checking the stream index and filtering out audio frames only).
So now I have the encoded data buffer as packet.data and the encoded data buffer size as packet.size (please correct me if I am wrong).
Here is my first doubt: the audio packet size is not the same every time; why the difference? Sometimes the packet size is as low as 54 bytes and sometimes it is 420 bytes. For OPUS, will the frame size vary from time to time?
Next, say I somehow extract a single frame (I really do not know how to extract a single frame) from the packet and send it to the remote guy.
Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as an argument. I can create an AVPacket and set its data and size members, then call av_write_frame. But that does not work. The reason is probably that one should set the other members of the packet, like pts, dts, etc., but I do not have that information to set.
Can somebody help me learn whether FFmpeg is the right choice, or should I write custom logic, like parsing an OPUS file and extracting it frame by frame?
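For what it's worth, the first steps described above (demux small.webm and pull out only the OPUS packets) look roughly like this; send_to_peer is just a placeholder for the socket code, and error cleanup is trimmed:

    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstdint>
    #include <cstdio>

    // Placeholder for the real socket send: here it only logs what would go out.
    static void send_to_peer(const uint8_t* data, int size,
                             int64_t pts, int64_t dts, int64_t duration)
    {
        (void)data;
        std::printf("OPUS packet: %d bytes, pts=%lld dts=%lld dur=%lld\n",
                    size, (long long)pts, (long long)dts, (long long)duration);
    }

    int send_audio_packets(const char* path)   // e.g. "small.webm"
    {
        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, path, nullptr, nullptr) < 0)
            return -1;
        if (avformat_find_stream_info(fmt, nullptr) < 0)
            return -1;

        // Index of the audio (OPUS) stream inside the webm container.
        int audio_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
        if (audio_idx < 0)
            return -1;

        AVPacket* pkt = av_packet_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {       // one demuxed packet at a time
            if (pkt->stream_index == audio_idx) {
                // pkt->data / pkt->size is the encoded OPUS payload; the size varies
                // per packet, which is normal for OPUS. The timing fields below are
                // exactly what the receiver will need (see the answer that follows).
                send_to_peer(pkt->data, pkt->size, pkt->pts, pkt->dts, pkt->duration);
            }
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
        avformat_close_input(&fmt);
        return 0;
    }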
Yes, you do have that information to set: it was in the original packet you received from the demuxer on the sender side. You need to serialize all of the information in that packet and set each value accordingly on the receiver.
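Concretely, the receiver has to rebuild each packet from the fields the sender shipped along with the payload before handing it to the muxer. A rough sketch (the WirePacket struct and its wire format are made up for illustration; the libav calls are the real ones):

    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstring>

    // Hypothetical record of everything the sender serialized from the demuxed packet.
    struct WirePacket {
        const uint8_t* data;
        int            size;
        int64_t        pts, dts, duration;
        int            stream_index;
        AVRational     time_base;   // time base of the sender's input stream
    };

    // Write one received packet into an already opened, header-written muxer context.
    int write_received_packet(AVFormatContext* out_ctx, const WirePacket& w)
    {
        AVPacket* pkt = av_packet_alloc();
        if (!pkt)
            return -1;
        if (av_new_packet(pkt, w.size) < 0) {
            av_packet_free(&pkt);
            return -1;
        }
        std::memcpy(pkt->data, w.data, w.size);

        // Restore the timing fields that travelled with the original packet...
        pkt->pts          = w.pts;
        pkt->dts          = w.dts;
        pkt->duration     = w.duration;
        pkt->stream_index = w.stream_index;

        // ...and rescale them from the sender's time base to the output stream's.
        av_packet_rescale_ts(pkt, w.time_base,
                             out_ctx->streams[w.stream_index]->time_base);

        int ret = av_interleaved_write_frame(out_ctx, pkt);   // muxer consumes the packet
        av_packet_free(&pkt);
        return ret;
    }

av_packet_rescale_ts is what maps the sender's time base onto the output stream's, which is the part that usually goes missing when only data and size are copied over.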

FFMPEG libavformat internal buffering

I'm using FFMPEG for a C++ audio streaming and playback application.
I use the avformat_open_input function to open a URL to an external compressed audio file and then step through the stream using av_read_frame. For each packet I directly decode the data and queue it in the audio buffer using OpenAL.
My question is whether FFMPEG internally prebuffers compressed data from the external URL.
Does FFMPEG keep downloading data in the background even if I don't call av_read_frame?
Or is it my responsibility to maintain an intermediate buffer, where I download as many packets as possible ahead of time to avoid starving the audio playback?
If so, how much does it buffer/download internally? Can I configure this?
I have been looking through the documentation but have not found any information on this.
Thanks.
Update:
According to this thread http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=376, libav should by default prebuffer about 5 MB depending on AVFormatContext::max_analyze_duration. However, I haven't noticed this behavior, and it doesn't seem to change if I alter max_analyze_duration.
If I monitor the memory consumption of my process, it doesn't increase after I call avformat_open_input, and if I simulate a slow network, av_read_frame simply stops returning data, as if it didn't have any packets buffered.
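As far as I can tell, the values that thread mentions control probing, i.e. how much data avformat_open_input / avformat_find_stream_info read up front to identify the streams, not a background prefetch, which would be consistent with what you observed. A minimal sketch of setting them explicitly (the URL and the numbers are placeholders):

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    }

    // Open a remote stream while limiting how much data is read up front for probing.
    // "probesize" and "analyzeduration" are the option names behind
    // AVFormatContext::probesize and max_analyze_duration; the URL is a placeholder.
    AVFormatContext* open_with_small_probe(const char* url)
    {
        AVDictionary* opts = nullptr;
        av_dict_set(&opts, "probesize",       "65536",  0);   // bytes read while probing
        av_dict_set(&opts, "analyzeduration", "500000", 0);   // microseconds analyzed

        AVFormatContext* ctx = nullptr;
        int ret = avformat_open_input(&ctx, url, nullptr, &opts);
        av_dict_free(&opts);                 // unrecognized options are returned here
        if (ret < 0)
            return nullptr;

        if (avformat_find_stream_info(ctx, nullptr) < 0) {
            avformat_close_input(&ctx);
            return nullptr;
        }
        // After this, each av_read_frame() pulls data from the URL on demand; any
        // read-ahead needed for smooth playback has to be done by the application.
        return ctx;
    }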

OS X: Core Audio: Parse raw, compressed audio data with AudioToolbox (to get PCM)

I am downloading various sound files (mp3s, aiffs, etc.) with my own C++ HTTP client. Now I want to parse them using Core Audio's AudioToolbox to get linear PCM data for playback with, for example, OpenAL. According to this document: https://developer.apple.com/library/mac/#documentation/MusicAudio/Conceptual/CoreAudioOverview/ARoadmaptoCommonTasks/ARoadmaptoCommonTasks.html , it should also be possible to create an audio file from memory. Unfortunately I didn't find any way of doing this when browsing the API, so what is the common way to do it? Please don't say that I should save the file to my hard drive first.
Thank you!
I have done this using an input memory buffer, avoiding any files. In my case I started with AAC audio and used Apple's API AudioConverterFillComplexBuffer to do the hardware decompression into LPCM. The trick is that you have to define a callback function to supply each packet of input data; that API call does the format conversion on a per-packet basis. In my case I had to write code that parses the compressed AAC data to identify the packet starts (0xfff), then uses the callback to spoon-feed each packet into the API call. I am also using OpenAL for audio rendering, which has its own challenges when avoiding input files.
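For reference, here is roughly what that callback and the AudioConverterFillComplexBuffer call look like. This is a hedged sketch, not the answerer's actual code: it assumes the converter was already created with AudioConverterNew (compressed source format to 16-bit stereo LPCM), and that one compressed packet has already been located in the downloaded buffer.

    #include <AudioToolbox/AudioToolbox.h>
    #include <cstdint>
    #include <vector>

    // One compressed packet that was located in the in-memory download
    // (for ADTS AAC, by scanning for the 0xfff sync pattern as described above).
    struct InputState {
        const uint8_t*               packetData;
        UInt32                       packetSize;
        AudioStreamPacketDescription packetDesc;
    };

    // Input callback: the converter calls this whenever it wants more compressed data.
    static OSStatus supplyPacket(AudioConverterRef /*converter*/,
                                 UInt32* ioNumberDataPackets,
                                 AudioBufferList* ioData,
                                 AudioStreamPacketDescription** outPacketDesc,
                                 void* inUserData)
    {
        InputState* state = static_cast<InputState*>(inUserData);
        if (state->packetSize == 0) {          // nothing left for this call
            *ioNumberDataPackets = 0;
            return 'done';                     // any nonzero status means "no more input"
        }
        ioData->mNumberBuffers = 1;
        ioData->mBuffers[0].mData = const_cast<uint8_t*>(state->packetData);
        ioData->mBuffers[0].mDataByteSize = state->packetSize;
        state->packetDesc.mStartOffset = 0;
        state->packetDesc.mVariableFramesInPacket = 0;
        state->packetDesc.mDataByteSize = state->packetSize;
        if (outPacketDesc)
            *outPacketDesc = &state->packetDesc;
        *ioNumberDataPackets = 1;              // we handed over exactly one packet
        state->packetSize = 0;                 // mark it consumed
        return noErr;
    }

    // Decode one compressed packet to interleaved 16-bit stereo PCM (assumed format).
    std::vector<uint8_t> decodePacket(AudioConverterRef converter,
                                      const uint8_t* data, UInt32 size)
    {
        InputState state = { data, size, {} };

        const UInt32 bytesPerFrame = 2 * sizeof(int16_t);   // stereo, 16-bit
        const UInt32 maxFrames = 4096;                       // plenty for one packet
        std::vector<uint8_t> pcm(maxFrames * bytesPerFrame);

        AudioBufferList outList = {};
        outList.mNumberBuffers = 1;
        outList.mBuffers[0].mNumberChannels = 2;
        outList.mBuffers[0].mDataByteSize   = static_cast<UInt32>(pcm.size());
        outList.mBuffers[0].mData           = pcm.data();

        UInt32 outFrames = maxFrames;          // in: capacity, out: frames produced
        AudioConverterFillComplexBuffer(converter, supplyPacket, &state,
                                        &outFrames, &outList, nullptr);

        pcm.resize(outFrames * bytesPerFrame); // keep only what was actually decoded
        return pcm;
    }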
