I have HLS video chunks in an S3 bucket and I need to stream them to the frontend. From the front-end side it is fairly easy: they just send a GET request to video/filename and I need to give the file back. The thing is, if I do it the standard way, the whole file will first be downloaded to my server and saved in a buffer, and only then sent to the front end, which is not very good. Instead, I want to "stream" it, so that when, say, 1000 bytes arrive I send them to the front end immediately, without waiting for the complete download.
The question is: how can I do this? I thought that if I use io.Copy(responseWriter, response.Body) then it will actually send the response, but will it stream? Should I use a reverse proxy? Is there any solution using fasthttp?
io.Copy uses a 32 KB buffer internally. If this is too big for your streaming (or if you want to rate-limit it), just implement the loop that reads from upstream and writes to downstream yourself. Peeking at the implementation of io.Copy is trivial, so what is your actual question?
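A minimal sketch of such a handler, assuming the chunks are reachable over plain HTTP(S) at a made-up bucket URL (for a private bucket you would fetch through the AWS SDK instead). io.Copy pumps bytes from the upstream body to the client as they arrive, so only the 32 KB copy buffer is ever held in memory:

    package main

    import (
        "io"
        "net/http"
        "path"
    )

    // hypothetical public base URL of the bucket; adjust for your setup
    const bucketBase = "https://my-bucket.s3.amazonaws.com/video/"

    func videoHandler(w http.ResponseWriter, r *http.Request) {
        // /video/segment0.ts -> segment0.ts (path.Base also blocks ../ tricks)
        filename := path.Base(r.URL.Path)

        resp, err := http.Get(bucketBase + filename)
        if err != nil {
            http.Error(w, "upstream error", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()

        // pass the content type through so the player recognizes the chunk
        w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))

        // streams as bytes arrive; nothing beyond the copy buffer is retained
        io.Copy(w, resp.Body)
    }

    func main() {
        http.HandleFunc("/video/", videoHandler)
        http.ListenAndServe(":8080", nil)
    }

If the default buffer is too coarse, io.CopyBuffer(w, resp.Body, make([]byte, 1000)) runs the same loop with a size of your choosing, and httputil.NewSingleHostReverseProxy is the stock answer to the reverse-proxy variant.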
Related
What strategies can I use for downloading audio/mpeg data from a never-ending stream and chunking what’s downloaded when there are periods of silence?
Currently I’m using timeout and wget to download N-second chunks of data. This isn’t ideal since continuous content extends beyond the N-second window boundary and consequently sounds like an interruption during playback.
I’m wondering if there’s some way I can continuously download the stream to a temporary file, read it in parallel, evaluate silence gap candidates, and copy chunks of data between the found silence gaps. I don’t know how to do this though.
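For what it's worth, here is a rough sketch of that split-on-silence idea in Go (the question was not language-specific). It assumes the stream has already been decoded to 16-bit little-endian mono PCM arriving on stdin, since silence cannot be judged on compressed MP3 bytes directly, and the threshold values are made up for illustration:

    package main

    import (
        "bufio"
        "encoding/binary"
        "fmt"
        "io"
        "os"
    )

    const (
        sampleRate     = 44100          // assumed; must match the decoder's output
        silenceLevel   = 500            // |sample| below this counts as silence (tune me)
        silenceSamples = sampleRate / 2 // half a second of quiet ends a chunk
    )

    func writeChunk(name string, samples []int16) {
        f, err := os.Create(name)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        binary.Write(f, binary.LittleEndian, samples)
    }

    func main() {
        in := bufio.NewReader(os.Stdin)
        var chunk []int16
        quiet, n := 0, 0

        for {
            var s int16
            if err := binary.Read(in, binary.LittleEndian, &s); err != nil {
                if err != io.EOF && err != io.ErrUnexpectedEOF {
                    panic(err)
                }
                break
            }
            chunk = append(chunk, s)

            if s > -silenceLevel && s < silenceLevel {
                quiet++
            } else {
                quiet = 0
            }

            // enough consecutive quiet samples: flush this chunk, start the next
            if quiet >= silenceSamples {
                writeChunk(fmt.Sprintf("chunk%03d.raw", n), chunk)
                n++
                chunk, quiet = chunk[:0], 0
            }
        }
        if len(chunk) > 0 {
            writeChunk(fmt.Sprintf("chunk%03d.raw", n), chunk)
        }
    }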
I've been struggling with the following problem and can't figure out a solution. The provided Java server application sends PCM audio data in chunks over a WebSocket connection; there are no headers or any other framing. My task is to play these raw chunks of audio data in the browser without any delay. In an earlier version I used audioContext.decodeAudioData, because I was getting the full array with the 44-byte header at the beginning. Now there is no header, so decodeAudioData cannot be used. I'll be very grateful for any suggestions and tips. Maybe I have to use some JS decoding library; any example or link will help me a lot.
Thanks.
1) Your requirement to "play these raw chunks of audio data in the browser without any delay" is not possible. There is always some amount of time needed to send audio, receive it, and play it. Read about the term "latency." First you must set a realistic requirement; it might be 1 second or 50 milliseconds, but you need something realistic.
2) WebSockets use TCP. TCP is designed for reliable communication, congestion control, etc. It is not designed for fast, low-latency communication.
3) Give more information about your problem. Are your client and server communicating over the Internet or over a local LAN? This will hugely affect your performance and design.
4) The 44-byte header was a WAV file header. It tells you the type of data (sample rate, mono/stereo, bits per sample). You must know this information to be able to play the audio. If you know the PCM type, you could insert the header yourself and use your decoder as you did before (see the sketch below). Otherwise, you need to construct an audio player manually.
Streaming audio over networks is not a trivial task.
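To illustrate point 4: the 44 bytes are a fixed layout you can rebuild from the stream parameters. A sketch of that layout, shown in Go for compactness (the same byte writes work in JS with a DataView); the sample rate, channel count, and bit depth are assumptions you must replace with your stream's real values:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // wavHeader builds the standard 44-byte RIFF/WAVE header for raw PCM.
    // sampleRate, channels, and bitsPerSample must match the actual stream.
    func wavHeader(dataLen, sampleRate, channels, bitsPerSample int) []byte {
        var b bytes.Buffer
        byteRate := sampleRate * channels * bitsPerSample / 8
        blockAlign := channels * bitsPerSample / 8

        b.WriteString("RIFF")
        binary.Write(&b, binary.LittleEndian, uint32(36+dataLen)) // total size - 8
        b.WriteString("WAVE")

        b.WriteString("fmt ")
        binary.Write(&b, binary.LittleEndian, uint32(16)) // fmt chunk size
        binary.Write(&b, binary.LittleEndian, uint16(1))  // 1 = integer PCM
        binary.Write(&b, binary.LittleEndian, uint16(channels))
        binary.Write(&b, binary.LittleEndian, uint32(sampleRate))
        binary.Write(&b, binary.LittleEndian, uint32(byteRate))
        binary.Write(&b, binary.LittleEndian, uint16(blockAlign))
        binary.Write(&b, binary.LittleEndian, uint16(bitsPerSample))

        b.WriteString("data")
        binary.Write(&b, binary.LittleEndian, uint32(dataLen))
        return b.Bytes()
    }

    func main() {
        // assuming 44.1 kHz, mono, 16-bit; substitute your stream's real values
        hdr := wavHeader(4096, 44100, 1, 16)
        fmt.Println(len(hdr)) // 44
    }

Prepend that header to a PCM chunk and decodeAudioData will accept it again.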
I am trying to use FFmpeg and have been experimenting with it for the last month.
I have not been able to get it working. Is it really that difficult to use FFmpeg?
My requirement is simple, as below.
Can you please guide me on whether FFmpeg is suitable, or whether I have to implement this on my own (using the available codec libraries)?
I have a webm file (having VP8 and OPUS frames)
I will read the encoded data and send it to remote guy
The remote guy will read the encoded data from socket
The remote guy will write it to a file (can we avoid decoding?).
Then the remote guy should be able to play the file using ffplay or any player.
Now I will take a specific example.
Say I have a file small.webm, containing VP8 and OPUS frames.
I am reading only the audio frames (OPUS) using the av_read_frame API (then checking the stream index and keeping audio frames only).
So now I have the encoded data buffer as packet.data and the encoded data buffer size as packet.size (please correct me if I'm wrong).
Here is my first doubt: the audio packet size is not the same every time. Why the difference? Sometimes the packet size is as low as 54 bytes and sometimes it is 420 bytes. For OPUS, will the frame size vary from time to time?
Next, say I somehow extract a single frame (I really do not know how to extract a single frame) from the packet and send it to the remote guy.
Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as an argument. I can create an AVPacket and set its data and size members, then call av_write_frame, but that does not work. The reason may be that one should also set other members of the packet, like pts, dts, etc., but I do not have that information to set.
Can somebody help me figure out whether FFmpeg is the right choice, or should I write custom logic, such as parsing an Opus file and getting it frame by frame?
Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as an argument. I can create an AVPacket and set its data and size members, then call av_write_frame, but that does not work. The reason may be that one should also set other members of the packet, like pts, dts, etc., but I do not have that information to set.
Yes, you do. They were in the original packet you received from the demuxer on the sender side. You need to serialize all the information in this packet and set each value accordingly on the receiver.
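To make "serialize all information" concrete, here is a hedged sketch of one possible wire format, written in Go: it carries pts, dts, duration, stream index, and flags along with the payload, so the receiver can populate an equivalent AVPacket before handing it to the muxer. The framing is invented for illustration; it is not an FFmpeg API:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // wirePacket mirrors the AVPacket fields the muxer needs on the far side.
    type wirePacket struct {
        PTS, DTS    int64
        Duration    int64
        StreamIndex int32
        Flags       int32
        Data        []byte
    }

    // writePacket frames one packet: fixed-size header, then the payload.
    func writePacket(w io.Writer, p wirePacket) error {
        for _, v := range []interface{}{
            p.PTS, p.DTS, p.Duration, p.StreamIndex, p.Flags, uint32(len(p.Data)),
        } {
            if err := binary.Write(w, binary.BigEndian, v); err != nil {
                return err
            }
        }
        _, err := w.Write(p.Data)
        return err
    }

    // readPacket is the mirror image on the receiving side.
    func readPacket(r io.Reader) (p wirePacket, err error) {
        var size uint32
        for _, v := range []interface{}{
            &p.PTS, &p.DTS, &p.Duration, &p.StreamIndex, &p.Flags, &size,
        } {
            if err = binary.Read(r, binary.BigEndian, v); err != nil {
                return
            }
        }
        p.Data = make([]byte, size)
        _, err = io.ReadFull(r, p.Data)
        return
    }

    func main() {
        // round-trip through a buffer to show the framing works
        var buf bytes.Buffer
        writePacket(&buf, wirePacket{PTS: 960, DTS: 960, Duration: 960,
            StreamIndex: 1, Data: []byte{0x54, 0x65}})
        out, _ := readPacket(&buf)
        fmt.Printf("pts=%d size=%d\n", out.PTS, len(out.Data)) // pts=960 size=2
    }

Note that pts and dts are expressed in the stream's time base, so communicate that too (once, at stream setup) and set it on the output stream before muxing.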
I'm using FFmpeg in a C++ audio streaming and playback application.
I use the avformat_open_input function to open a URL to an external compressed audio file, and then I step through the stream using av_read_frame. For each packet I directly decode the data and queue it in the audio buffer using OpenAL.
My question is whether FFmpeg internally prebuffers compressed data from the external URL.
Does FFmpeg keep downloading data in the background even if I don't call av_read_frame?
Or is it my responsibility to maintain an intermediate buffer, downloading as many packets as possible ahead of time to avoid starving the audio playback?
If so, how much does it buffer/download internally? Can I configure this?
I have been looking through the documentation but have not found any information on this.
Thanks.
Update:
According to this thread http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=376, libav should by default prebuffer about 5 MB, depending on AVFormatContext::max_analyze_duration. However, I haven't observed this behavior, and it doesn't seem to change when I alter max_analyze_duration.
If I monitor the memory consumption of my process, it doesn't increase after I call avformat_open_input, and if I simulate a slow network, av_read_frame stops working immediately, as if it didn't have any packets buffered.
I'm searching for a way to analyse the content of internet radios. I want to write a Ruby client that can get the current track, next track, band, BPM, and other meta information from a stream (e.g. a radio station on SHOUTcast).
Does anybody know how to do this? And how do I record that stream into an MP3 or AAC file?
Maybe there is a library that can already do this; I haven't found one so far.
regards
I'll answer both of your questions.
Metadata
What you are seeking isn't entirely possible. Information on the next track is not available (keep in mind not all stations are just playing songs from a playlist... many offer live content). Advanced metadata such as BPM is not available. All you get is something like this:
Some Band - Some Song
The format of {artist} - {song title} isn't always followed either.
With those caveats, you can get that metadata from a stream by connecting to the stream URL and requesting the metadata with the following request header:
Icy-MetaData: 1
That tells the server to send the metadata, which is interleaved into the stream. Every 8 KB or so (the exact interval is specified by the server in a response header), you'll find a chunk of metadata to parse. I have written up a detailed answer on how to parse it here: Pulling Track Info From an Audio Stream Using PHP. That question was language-specific, but you will find that my answer can easily be implemented in any language.
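For illustration, here is that exchange in Go (you asked about Ruby; the headers and framing are identical in any language). The stream URL is a placeholder. The icy-metaint response header tells you how many audio bytes sit between metadata blocks, and after each block comes one length byte giving the metadata size in 16-byte units. One caveat: very old SHOUTcast v1 servers answer with a bare "ICY 200 OK" status line, which Go's net/http rejects; Icecast and newer servers speak normal HTTP:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "strconv"
    )

    func main() {
        // placeholder stream URL
        req, _ := http.NewRequest("GET", "http://example.com:8000/stream", nil)
        req.Header.Set("Icy-MetaData", "1") // ask for interleaved metadata

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // audio bytes between metadata blocks, e.g. 8192
        metaInt, err := strconv.Atoi(resp.Header.Get("Icy-Metaint"))
        if err != nil {
            panic("server did not send icy-metaint")
        }

        audio := make([]byte, metaInt)
        for {
            if _, err := io.ReadFull(resp.Body, audio); err != nil {
                break
            }
            os.Stdout.Write(audio) // raw audio; redirect or write to a file

            var lenByte [1]byte // metadata length in 16-byte units
            if _, err := io.ReadFull(resp.Body, lenByte[:]); err != nil {
                break
            }
            if n := int(lenByte[0]) * 16; n > 0 {
                meta := make([]byte, n)
                io.ReadFull(resp.Body, meta)
                // e.g. StreamTitle='Some Band - Some Song';
                fmt.Fprintln(os.Stderr, string(meta))
            }
        }
    }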
Saving Streams to Disk
Audio playback software is generally very resilient to errors. SHOUTcast servers are built on this principle and are not knowledgeable about the data going through them. They just receive data from an encoder, and when a client requests the stream, they start sending that data at an arbitrary point.
You can use this to your advantage when saving stream data. It is possible to simply write the stream data to a file as it comes in, and most audio players will play the result without problems. I have tested this with MP3 and AAC.
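The save-to-disk part is then just a copy loop; a minimal sketch with a placeholder URL and filename. Don't send Icy-MetaData on this request, or you will get metadata blocks interleaved with the audio (strip them as in the reader above if you want both):

    package main

    import (
        "io"
        "net/http"
        "os"
    )

    func main() {
        resp, err := http.Get("http://example.com:8000/stream") // placeholder
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, err := os.Create("rip.mp3") // or .aac, to match the stream
        if err != nil {
            panic(err)
        }
        defer out.Close()

        // runs until the stream or the connection ends
        io.Copy(out, resp.Body)
    }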
If you want a more conformant file, you will have to use a library, or parse the stream yourself to split it on the appropriate frame boundaries and then handle bit-reservoir issues in your code. This is a lot of work, and it generally isn't worth doing unless you find that your files have real compatibility problems.