Sending per-frame metadata with H264 encoded frames - ffmpeg

We're looking for a way to send per-frame metadata (for example an ID) with H264 encoded frames from a server to a client.
We're currently developing a remote rendering application, where both client and server side are actively involved.
The server renders a high quality image with all effects, lighting etc.
The client also has the model information and renders a diffuse image that is used when the bandwidth is too low or when frames have to be warped to avoid stuttering.
So far we're encoding the frames on the server side with ffmpeg and streaming them with live555 to the client, which receives the RTSP stream and decodes the frames again using ffmpeg.
For our application, we now need to send per-frame metadata.
We want the client to tell the server where the camera is right now.
Ideally we'd be able to send the client's view matrix to the server, render the corresponding frame and send it back to the client together with its view matrix. So when the client receives a frame, we need to know exactly at what camera position the frame was rendered.
Alternatively we could also tag each view matrix with an ID, send it to the server, render the frame and tag it with the same ID and send it back. In this case we'd have to assign the right matrix to the frame again on the client side.
After several attempts to realize the above with ffmpeg, we came to the conclusion that ffmpeg does not provide the required functionality. ffmpeg only offers a fixed, predefined set of metadata fields that either cannot store a matrix or can only be set per key frame, and key frames are not frequent enough for our purpose.
Now we're considering using live555. So far we have an on-demand server which gets a VideoSubsession with an H264VideoStreamDiscreteFramer to contain our own FramedSource class. In this class we load the encoded AVPacket (from ffmpeg) and send its data buffer over the network. Now we need a way to send some kind of metadata with every frame to the client.
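For reference, the shape of such a FramedSource subclass is roughly the following (a simplified sketch, not our actual code; PacketQueue stands in for however the encoder thread hands packets over):

```cpp
// Simplified sketch of a live555 FramedSource fed by ffmpeg AVPackets.
// A real implementation would use live555's event triggers instead of
// blocking inside doGetNextFrame(); this only shows the data path.
#include "FramedSource.hh"
#include <cstring>
#include <sys/time.h>
extern "C" {
#include <libavcodec/avcodec.h>
}

struct PacketQueue {                      // placeholder for a thread-safe queue
    AVPacket* waitForPacket();            // blocks until the encoder produced a packet
};

class EncodedFrameSource : public FramedSource {
public:
    EncodedFrameSource(UsageEnvironment& env, PacketQueue& queue)
        : FramedSource(env), fQueue(queue) {}

private:
    void doGetNextFrame() override {
        AVPacket* pkt = fQueue.waitForPacket();

        // Copy the encoded NAL unit into live555's buffer. Note that
        // H264VideoStreamDiscreteFramer expects NAL units without the
        // Annex B start code (00 00 00 01).
        fFrameSize = pkt->size;
        if (fFrameSize > fMaxSize) {
            fNumTruncatedBytes = fFrameSize - fMaxSize;
            fFrameSize = fMaxSize;
        }
        memcpy(fTo, pkt->data, fFrameSize);
        gettimeofday(&fPresentationTime, nullptr);
        av_packet_free(&pkt);

        // Hand the frame to the framer; live555 will call doGetNextFrame() again.
        FramedSource::afterGetting(this);
    }

    PacketQueue& fQueue;
};
```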
Do you have any ideas how to solve this metadata problem with live555 or another library?
Thanks for your help!

It seems this question was answered in the comments:
pipe the output of ffmpeg through a custom tool that embedded the data
in the 264 elementary stream via an SEI
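For anyone who wants to follow that suggestion, here is a minimal sketch of building a user_data_unregistered SEI NAL unit (payload type 5) that can carry a frame ID or a serialized view matrix. The UUID is arbitrary and only has to be agreed on by sender and receiver; prepend the resulting NAL, with an Annex B start code, to each encoded access unit.

```cpp
// Sketch: build an H.264 "user data unregistered" SEI NAL unit (payload
// type 5) carrying a small per-frame payload such as a frame ID or a
// serialized view matrix. The UUID below is arbitrary; it just has to be
// agreed on by sender and receiver.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> buildSeiNal(const uint8_t* payload, size_t payloadLen) {
    static const uint8_t kUuid[16] = {           // arbitrary application UUID
        0x6d, 0x1a, 0x26, 0x9a, 0x1c, 0x43, 0x4f, 0x8b,
        0x92, 0x5e, 0x0d, 0x3a, 0x41, 0x7e, 0x55, 0x10 };

    // 1) Build the raw SEI RBSP: type, size, UUID, payload, trailing bits.
    std::vector<uint8_t> rbsp;
    rbsp.push_back(5);                            // payload type: user_data_unregistered
    size_t size = 16 + payloadLen;                // payload size, 0xFF-extended coding
    while (size >= 255) { rbsp.push_back(0xFF); size -= 255; }
    rbsp.push_back(static_cast<uint8_t>(size));
    rbsp.insert(rbsp.end(), kUuid, kUuid + 16);
    rbsp.insert(rbsp.end(), payload, payload + payloadLen);
    rbsp.push_back(0x80);                         // rbsp_trailing_bits

    // 2) Wrap in a NAL unit: header byte 0x06 (nal_unit_type = 6, SEI),
    //    inserting emulation prevention bytes (0x03) where needed.
    std::vector<uint8_t> nal;
    nal.push_back(0x06);
    int zeros = 0;
    for (uint8_t b : rbsp) {
        if (zeros >= 2 && b <= 0x03) { nal.push_back(0x03); zeros = 0; }
        nal.push_back(b);
        zeros = (b == 0x00) ? zeros + 1 : 0;
    }
    return nal;                                   // no start code; add 00 00 00 01 for Annex B
}
```

On the client side you scan each access unit for NAL type 6 with payload type 5 and a matching UUID; recent ffmpeg versions also expose such SEI messages as AV_FRAME_DATA_SEI_UNREGISTERED frame side data, which may spare you the manual parsing.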
Someone also gave the following answer, which was deleted a few years ago for dubious reasons (it is brief but does seem to contain sufficient information):
You can do so using MPEG-4. See MPEG-4 Part 14 for details.

Related

PCM audio streaming over websocket

I've been struggling with the following problem and can't figure out a solution. The provided Java server application sends PCM audio data in chunks over a websocket connection. There are no headers etc. My task is to play these raw chunks of audio data in the browser without any delay. In the earlier version, I used audioContext.decodeAudioData because I was getting the full array with the 44-byte header at the beginning. Now there is no header, so decodeAudioData cannot be used. I'll be very grateful for any suggestions and tips. Maybe I have to use some JS decoding library; any example or link will help me a lot.
Thanks.
1) Your requirement to "play these raw chunks of audio data in the browser without any delay" is not possible. There is always some amount of time needed to send audio, receive it, and play it. Read about the term "latency." First you must set a realistic requirement; it might be 1 second or 50 milliseconds, but it needs to be realistic.
2) Websockets use TCP. TCP is designed for reliable communication, congestion control, etc. It is not designed for fast, low-latency communication.
3) Give more information about your problem. Are your client and server communicating over the Internet or over a local LAN? This will hugely affect your performance and design.
4) The 44-byte header was a WAV file header. It describes the type of data (sample rate, mono/stereo, bits per sample). You must know this information to be able to play the audio. If you know the PCM format, you could insert the header yourself (see the sketch after this answer) and use your decoder as you did before. Otherwise, you need to construct an audio player manually.
Streaming audio over networks is not a trivial task.
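To illustrate point 4, here is a sketch of prepending a canonical 44-byte PCM WAV header to raw samples; the sample rate, channel count, and bit depth shown are only example defaults and must match the actual stream (the same layout can be written from browser JavaScript with a DataView).

```cpp
// Sketch: prepend a canonical 44-byte WAV (RIFF/PCM) header to raw PCM
// data so it can be handed to a decoder that expects a complete file.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> addWavHeader(const std::vector<uint8_t>& pcm,
                                  uint32_t sampleRate = 44100,
                                  uint16_t channels = 2,
                                  uint16_t bitsPerSample = 16) {
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t dataSize   = static_cast<uint32_t>(pcm.size());

    std::vector<uint8_t> out(44 + pcm.size());
    auto put32 = [&](size_t off, uint32_t v) { memcpy(&out[off], &v, 4); };
    auto put16 = [&](size_t off, uint16_t v) { memcpy(&out[off], &v, 2); };

    memcpy(&out[0],  "RIFF", 4);  put32(4, 36 + dataSize);
    memcpy(&out[8],  "WAVE", 4);
    memcpy(&out[12], "fmt ", 4);  put32(16, 16);          // fmt chunk size
    put16(20, 1);                                         // audio format: PCM
    put16(22, channels);          put32(24, sampleRate);
    put32(28, byteRate);          put16(32, blockAlign);
    put16(34, bitsPerSample);
    memcpy(&out[36], "data", 4);  put32(40, dataSize);

    memcpy(&out[44], pcm.data(), pcm.size());
    return out;                   // assumes a little-endian host, like WAV itself
}
```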

Analyse audio stream using Ruby

I'm searching for a way to analyse the content of internet radio stations. I want to write a Ruby client that can get the current track, next track, band, BPM and other meta information from a stream (e.g. a radio station on SHOUTcast).
Does anybody know how to do this? And how do I record that stream into an MP3 or AAC file?
Maybe there is a library that can already do this; I haven't found one so far.
regards
I'll answer both of your questions.
Metadata
What you are seeking isn't entirely possible. Information on the next track is not available (keep in mind not all stations are just playing songs from a playlist... many offer live content). Advanced metadata such as BPM is not available. All you get is something like this:
Some Band - Some Song
The format of {artist} - {song title} isn't always followed either.
With those caveats, you can get that metadata from a stream by connecting to the stream URL and requesting the metadata with the following request header:
Icy-MetaData: 1
That tells the server to send the metadata, which is interleaved into the stream. Every 8 KB or so (the exact interval is given by the server in the icy-metaint response header), you'll find a chunk of metadata to parse. I have written up a detailed answer on how to parse that here: Pulling Track Info From an Audio Stream Using PHP. The prior question was language-specific, but you will find that my answer can be easily implemented in any language.
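To make the interleaving concrete, here is a sketch of the de-interleaving logic (C++ here, though the same state machine is easy to port to Ruby). It assumes the request was made with the Icy-MetaData: 1 header and that metaInt holds the icy-metaint value; feed() is called with whatever the socket delivers.

```cpp
// Sketch: split a SHOUTcast/Icecast stream into audio bytes and metadata
// blocks. After every metaInt audio bytes the server sends one length byte L
// followed by L * 16 bytes of (NUL-padded) metadata such as
// StreamTitle='Some Band - Some Song';
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <functional>
#include <string>
#include <vector>

class IcyDemuxer {
public:
    IcyDemuxer(size_t metaInt,
               std::function<void(const uint8_t*, size_t)> onAudio,
               std::function<void(const std::string&)> onMetadata)
        : metaInt_(metaInt), audioRemaining_(metaInt),
          onAudio_(std::move(onAudio)), onMetadata_(std::move(onMetadata)) {}

    // Feed raw bytes exactly as they arrive from the socket.
    void feed(const uint8_t* data, size_t len) {
        size_t i = 0;
        while (i < len) {
            if (audioRemaining_ > 0) {
                // Plain audio bytes until the next metadata block is due.
                size_t chunk = std::min(audioRemaining_, len - i);
                onAudio_(data + i, chunk);
                audioRemaining_ -= chunk;
                i += chunk;
            } else if (!inMetadata_) {
                // Single length byte: the metadata block is length * 16 bytes.
                metaRemaining_ = static_cast<size_t>(data[i++]) * 16;
                meta_.clear();
                inMetadata_ = true;
                if (metaRemaining_ == 0) startNextAudioBlock();
            } else {
                meta_.push_back(static_cast<char>(data[i++]));
                if (--metaRemaining_ == 0) {
                    onMetadata_(std::string(meta_.data(),
                                            strnlen(meta_.data(), meta_.size())));
                    startNextAudioBlock();
                }
            }
        }
    }

private:
    void startNextAudioBlock() { audioRemaining_ = metaInt_; inMetadata_ = false; }

    size_t metaInt_;
    size_t audioRemaining_;
    size_t metaRemaining_ = 0;
    bool inMetadata_ = false;
    std::vector<char> meta_;
    std::function<void(const uint8_t*, size_t)> onAudio_;
    std::function<void(const std::string&)> onMetadata_;
};
```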
Saving Streams to Disk
Audio playing software is generally very resilient to errors. SHOUTcast servers are built on this principle and are not knowledgeable about the data going through them. They just receive data from an encoder, and when the client requests the stream, they start sending that data at an arbitrary point.
You can use this to your advantage when saving stream data. It is possible to simply write the stream data as it comes in to a file. Most audio players will play them without problem. I have tested this with MP3 and AAC.
If you want a more conformant file, you will have to use a library or parse the stream yourself to split on the appropriate frames, and then handle bit reservoir issues in your code. This is a lot of work, and generally isn't worth doing unless you find your files have real compatibility problems.
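If you do go down that road, the frame splitting itself looks roughly like the sketch below (MPEG-1 Layer III only, made-up function name, and the bit reservoir handling that makes this hard is deliberately left out):

```cpp
// Sketch: locate MPEG-1 Layer III (MP3) frame boundaries in a byte buffer
// by scanning for the 11-bit frame sync and computing each frame's length
// from its header fields.
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns offsets of frame headers found in the buffer.
std::vector<size_t> findMp3Frames(const uint8_t* buf, size_t len) {
    static const int kBitrateKbps[16] = {  0,  32,  40,  48,  56,  64,  80,  96,
                                         112, 128, 160, 192, 224, 256, 320,   0};
    static const int kSampleRate[4]   = {44100, 48000, 32000, 0};

    std::vector<size_t> frames;
    size_t i = 0;
    while (i + 4 <= len) {
        // Frame sync (11 set bits), MPEG-1, Layer III.
        if (buf[i] == 0xFF && (buf[i + 1] & 0xFE) == 0xFA) {
            int bitrate    = kBitrateKbps[(buf[i + 2] >> 4) & 0x0F] * 1000;
            int sampleRate = kSampleRate[(buf[i + 2] >> 2) & 0x03];
            int padding    = (buf[i + 2] >> 1) & 0x01;
            if (bitrate > 0 && sampleRate > 0) {
                size_t frameLen = 144 * bitrate / sampleRate + padding;
                frames.push_back(i);
                i += frameLen;
                continue;
            }
        }
        ++i;   // not a valid header here; resynchronize byte by byte
    }
    return frames;
}
```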

Encode WebCam frames with H.264 on .NET

What I want to do is the following procedure:
Get a frame from the Webcam.
Encode it with an H264 encoder.
Create a packet with that frame with my own "protocol" to send it via UDP.
Receive it and decode it...
It would be live streaming.
Well, I just need help with the second step.
I'm retrieving camera images with the AForge framework.
I don't want to write frames to files and then decode them; I guess that would be very slow.
I would like to handle encoded frames in memory and then create the packets to be sent.
I need to use an open-source encoder. I already tried with x264, following this example:
How does one encode a series of images into H264 using the x264 C API?
but it seems it only works on Linux, or at least that's what I thought after seeing about 50 errors when trying to compile the example with Visual C++ 2010.
I have to make clear that I already did a lot of research (a week of reading) before writing this, but couldn't find a (simple) way to do it.
I know there is the RTMP protocol, but the video stream will always be seen by one person at a time, and RTMP is more oriented towards streaming to many people. I also already streamed with an Adobe Flash application I made, but it was too laggy.
I would also like some advice on whether it's OK to send frames one by one or whether it would be better to send several frames in each packet.
I hope that someone can point me in the right direction.
Apologies for my English. :P
PS: It doesn't have to be in .NET; it can be in any language as long as it works on Windows.
Many thanks in advance.
You could try your approach using Microsoft's DirectShow technology. There is an open-source x264 wrapper available for download at Monogram.
If you download the filter, you need to register it with the OS using regsvr32. I would suggest doing some quick testing to find out whether this approach is feasible: use the GraphEdit tool to connect your webcam to the encoder and have a look at the configuration options.
I would also like some advice on whether it's OK to send frames one by one or whether it would be better to send several frames in each packet.
This really depends on the required latency: the more frames you package, the less header overhead, but the higher the latency, since you have to wait for multiple frames to be encoded before you can send them. For live streaming the latency should be kept to a minimum, and the typical protocols used are RTP/UDP. This implies that your maximum packet size is limited to the MTU of the network, often requiring IDR frames to be fragmented and sent in multiple packets.
My advice would be to not worry about sending more frames in one packet until/unless you have a reason to. This is more often necessary with audio streaming since the header size (e.g. IP + UDP + RTP) is considered big in relation to the audio payload.
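As an illustration of that last point, here is a sketch of the FU-A fragmentation defined in RFC 6184, which is how an RTP packetizer splits NAL units (typically IDR frames) that exceed the MTU; the 1400-byte payload limit is an assumption, and building the actual RTP headers is left to whichever RTP library you use.

```cpp
// Sketch: split one H.264 NAL unit into RFC 6184 FU-A fragments so that
// each RTP payload stays under a chosen size (MTU minus IP/UDP/RTP headers).
// NAL units that already fit are sent as a single RTP payload unchanged.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::vector<uint8_t>> fragmentNal(const uint8_t* nal, size_t nalLen,
                                              size_t maxPayload = 1400) {
    std::vector<std::vector<uint8_t>> payloads;
    if (nalLen <= maxPayload) {                    // fits: single NAL unit packet
        payloads.emplace_back(nal, nal + nalLen);
        return payloads;
    }

    uint8_t nalHeader   = nal[0];
    uint8_t fuIndicator = (nalHeader & 0xE0) | 28; // keep F/NRI bits, type 28 = FU-A
    uint8_t nalType     = nalHeader & 0x1F;

    size_t offset = 1;                             // original NAL header is not repeated
    while (offset < nalLen) {
        size_t chunk = std::min(maxPayload - 2, nalLen - offset);
        bool first = (offset == 1);
        bool last  = (offset + chunk == nalLen);

        std::vector<uint8_t> p;
        p.push_back(fuIndicator);
        p.push_back(static_cast<uint8_t>((first ? 0x80 : 0x00) |   // S bit
                                         (last  ? 0x40 : 0x00) |   // E bit
                                         nalType));
        p.insert(p.end(), nal + offset, nal + offset + chunk);
        payloads.push_back(std::move(p));
        offset += chunk;
    }
    return payloads;       // each entry becomes one RTP packet payload; only the
                           // last packet of the frame carries the RTP marker bit
}
```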

Can I get raw video frames from DirectShow without playback

I'm working on a media player using Media Foundation. I want to support VOB file playback. However, Media Foundation currently does not support the VOB container. Therefore I wish to use DirectShow for this.
My idea here is not to take an alternate path using a DirectShow graph, but just to grab a video frame and pass it to the same pipeline in Media Foundation. In Media Foundation, I have an IMFSourceReader which simply reads frames from the video file. Is there a DirectShow equivalent which just gives me the frames, without needing to create a graph, start a playback cycle, and then try to extract frames from the renderer pin? (To be clearer: does DirectShow support an architecture wherein it could give me raw frames without actually having to play the video?)
I've read about ISampleGrabber, but it's deprecated and I think it won't fit my architecture. I've not worked with DirectShow before.
Thanks,
Mots
You have to build a graph and accept frames from the respective parser/demultiplexer filter, which will read the container and deliver individual frames on its output.
The playback does not have to be real-time, nor do you need to fake painting those video frames somewhere. Once you get the data you need in a Sample Grabber filter, or a custom filter, you can terminate the pipeline with a Null Renderer. That is, you can arrange getting the frames you need in a more or less convenient way.
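A rough sketch of that arrangement is shown below. Error handling and cleanup are omitted, the Sample Grabber interfaces come from the legacy qedit.h header, and RenderStream is simply asked to insert whatever decoders are needed, so treat it as an outline rather than drop-in code.

```cpp
// Sketch: a DirectShow graph that decodes a file and hands decoded frames to
// a callback via the (deprecated) Sample Grabber, terminated by a Null
// Renderer so nothing is painted. Link with strmiids.lib; the caller is
// assumed to have called CoInitializeEx already.
#include <dshow.h>
#include <qedit.h>      // ISampleGrabber / ISampleGrabberCB (legacy SDKs)

class FrameSink : public ISampleGrabberCB {
public:
    // Called by the Sample Grabber for every decoded frame buffer.
    STDMETHODIMP BufferCB(double sampleTime, BYTE* buffer, long length) override {
        // ... hand the raw frame to the Media Foundation side here ...
        return S_OK;
    }
    STDMETHODIMP SampleCB(double, IMediaSample*) override { return E_NOTIMPL; }
    STDMETHODIMP_(ULONG) AddRef() override { return 2; }   // stack lifetime only
    STDMETHODIMP_(ULONG) Release() override { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) override {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
};

void grabFrames(const wchar_t* path, FrameSink& sink) {
    IGraphBuilder* graph = nullptr;
    ICaptureGraphBuilder2* builder = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, reinterpret_cast<void**>(&graph));
    CoCreateInstance(CLSID_CaptureGraphBuilder2, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, reinterpret_cast<void**>(&builder));
    builder->SetFiltergraph(graph);

    // Source filter for the file; the installed splitter handles the container.
    IBaseFilter* source = nullptr;
    graph->AddSourceFilter(path, L"Source", &source);

    // Sample Grabber configured for uncompressed video, plus a Null Renderer.
    IBaseFilter* grabberFilter = nullptr;
    CoCreateInstance(CLSID_SampleGrabber, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, reinterpret_cast<void**>(&grabberFilter));
    graph->AddFilter(grabberFilter, L"Grabber");

    ISampleGrabber* grabber = nullptr;
    grabberFilter->QueryInterface(IID_ISampleGrabber, reinterpret_cast<void**>(&grabber));
    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype   = MEDIASUBTYPE_RGB24;            // ask the graph to decode to RGB24
    grabber->SetMediaType(&mt);
    grabber->SetCallback(&sink, 1);               // 1 = use BufferCB

    IBaseFilter* nullRenderer = nullptr;
    CoCreateInstance(CLSID_NullRenderer, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, reinterpret_cast<void**>(&nullRenderer));
    graph->AddFilter(nullRenderer, L"Null");

    // Let the builder insert whatever decoders are needed in between.
    builder->RenderStream(nullptr, &MEDIATYPE_Video, source, grabberFilter, nullRenderer);

    // Drop the reference clock so the graph runs as fast as it can decode.
    IMediaFilter* mediaFilter = nullptr;
    graph->QueryInterface(IID_IMediaFilter, reinterpret_cast<void**>(&mediaFilter));
    mediaFilter->SetSyncSource(nullptr);

    IMediaControl* control = nullptr;
    graph->QueryInterface(IID_IMediaControl, reinterpret_cast<void**>(&control));
    control->Run();
    // ... wait for completion via IMediaEvent, then release all interfaces ...
}
```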
You can use the Monogram frame grabber filter to connect to the VOB DS filter's output; it works great. See the comments there for how to connect the output to an external application.

Playing a stream of video data using QTKit on Mac OS X

I've been playing with QTKit for a couple of days and I'm successfully able to record video data to a file from the camera using a QTCaptureSession and QTCaptureDeviceInput etc.
However what I want to do is send the data to another location, either over the network or to a different object within the same app (it doesn't matter) and then play the video data as if it were a stream.
I have a QTCaptureMovieFileOutput and I am passing nil as the file URL so that it doesn't actually record the data to the file (I'm only interested in the data contained in the QTSampleBuffer that is available via the delegate callback).
I have set a QTCompressionOptions object on the output specifying H264 Video and High Quality AAC Audio compression.
Each time I receive a callback, I append the data from the sample buffer into an NSMutableData object I have as an instance variable.
The problem I have is that no 'player' object in the QTKit framework seems capable of playing a 'stream' of video data. Am I correct in this assumption?
I tried creating a QTMovie object (to play in a QTMovieView) using my data instance variable but I get the error that the data is not a movie.
Am I approaching this issue from the wrong angle?
Previously I was using a QTCapturePreviewOutput which passes CVImageBufferRefs for each video frame. I was converting these frames into NSImages to display on a view.
While this gave the impression of streaming, it was slow and processor hungry.
How have other people conquered the streaming video problem?
Seems to me like you'd need to make a GL texture and then load the data into it on a per-frame basis. Everything about QTMovie seems to be based on pre-existing files, as far as my little mind can tell.
