Is mp4 streamable with ffserver? - ffmpeg

For days I have been trying to stream an mp4 file with ffserver.
I have read many questions like these:
https://superuser.com/questions/563591/streaming-mp4-with-ffmpeg
Begin stream simple mp4 with ffserver
http://ffmpeg.gusari.org/viewtopic.php?f=12&t=1190
http://ffmpeg.org/pipermail/ffserver-user/2012-July/000204.html
HTML5 - How to stream large .mp4 files?
In the end I still can't tell: is mp4 streamable or not?
Is there a way to do this with ffserver?
Is there any sample? The docs I have read are mostly about live streaming, but I
just want to stream a simple mp4 file.

Yes.
Streaming an mp4 file is very much possible with ffserver. However, it might require some reading of the documentation:
https://ffmpeg.org/ffmpeg.html
https://ffmpeg.org/ffserver.html
The crucial part is writing the configuration file for ffserver (ffserver.conf). As far as I know, ffmpeg provides some sample configurations.
They might be a bit outdated, but if you try to run them, ffserver will tell you if something isn't as it should be :)
Edit:
(Since I only have a rep of 1, I can't post more than 2 links, so I removed the sample links and show a rather simple configuration below.)
To stream an mp4 file you should be aware that ffserver may have problems streaming in the mp4 format itself. You can still stream an mp4 file, just in a different container format.
A very simple way would be like this:
<Stream streamTest.asf> #ASF as the streaming Format
File "/tmp/video1.mp4" #or wherever you store your Videos
</Stream>
The server converts the file on its own, but if you really want to stream in mp4 you may have to take a closer look at "fragmented mp4".
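As an illustration of what fragmented MP4 means (this is a separate ffmpeg step, not something the ffserver config above does), a regular mp4 can be remuxed into a fragmented one without re-encoding; fragmented.mp4 is just a placeholder name:
# moof/mdat fragments make the file usable before it is fully written
ffmpeg -i /tmp/video1.mp4 -c copy -movflags +frag_keyframe+empty_moov fragmented.mp4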
To watch the stream, use a player that can handle ASF (I used VLC) and open the URL:
ip-address:port/streamTest.asf
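For example, assuming the conventional HTTPPort of 8090 and a placeholder server address, with ffplay:
ffplay http://<ip-address>:8090/streamTest.asf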
Summary:
I should say that I am also still learning the ways of ffserver, so there might be some mistakes :)
This is a short summary of the chapters from the ffserver documentation to get you started.
5.2 Global options
The options in this chapter specify your server settings: for example, how many simultaneous requests should be handled, on what port you want to stream, etc. For people who are completely new to ffserver, most of the default values should be sufficient.
5.3 Feed section
The feed section is one of the core parts of ffserver. Since a feed can serve multiple streams, it might be useful to build that first.
Note: a feed is only necessary if you want to a) live stream, b) stream files that are not stored on your server, or c) process the file before streaming; a minimal feed section is sketched below.
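A minimal sketch of such a feed section, with illustrative values for the buffer file, its size cap, and who may publish to it:
<Feed feed1.ffm>
# buffer file that ffserver keeps for this feed, with a size cap
File "/tmp/feed1.ffm"
FileMaxSize 200M
# only allow the local machine to publish to this feed
ACL allow 127.0.0.1
</Feed>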
5.4 Stream section
Here you can actually build your own stream. There are a lot of variables that can be changed, and I recommend starting slowly, adding and customizing options one at a time.
From this point on the documentation does a decent job. So now you know what you need (again, the possibilities feel countless, but I'm still a beginner^^) and where to find the basics.
The structure of your ffserver.conf might (but doesn't have to) look like this:
#Options from 5.2
HTTPPort 8090
#...
#Feed (Options from 5.3)
<Feed feed1.ffm>
#...
</Feed>
#
#Stream (Options from 5.4)
<Stream stream1.asf>
Feed feed1.ffm
Format asf
NoAudio
#...
</Stream>
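To actually get media into feed1.ffm, you point an ffmpeg process at the feed URL. A sketch, assuming the server runs locally on port 8090 as configured above and input.mp4 is a placeholder file name:
# ffserver must already be running with the config above
ffserver -f ffserver.conf &
# push the file into the feed; ffserver then serves it as stream1.asf
ffmpeg -i input.mp4 http://localhost:8090/feed1.ffm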
Since this is my first post, I hope it is not too chaotic :)

ffserver.conf:
HTTPPort 8090
HTTPBindAddress 0.0.0.0
RTSPPort 8091
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
<Stream 1.mp4>
File "/path/1.mp4"
Format rtp
</Stream>
Start:
ffserver -f ffserver.conf
Play:
ffplay rtsp://localhost:8091/1.mp4

Related

Convert m3u8 (HLS) to mpd (MPEG-DASH)

I have a live HLS stream [https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/definst/IPBCchannel11LVM_3.stream/playlist.m3u8] and I want to convert it to MPEG-DASH.
What is the best practice?
The stream is already H.264/AAC, therefore I understand I do not need to re-encode; I just need to transmux.
What should I use?
ffmpeg? mp4box?
Notes:
I used nginx-rtmp-module (https://github.com/ut0mt8/nginx-rtmp-module/) to create DASH from an RTMP stream according to this tutorial: https://isrv.pw/html5-live-streaming-with-mpeg-dash
But nginx-rtmp-module only accepts RTMP streams as input, and it did not work for me with the HLS stream.
I used ffmpeg to create DASH from the m3u8 as follows:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -strict -2 -min_seg_duration 2000 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd
But this is very limited: I can't control the segment duration.
The min_seg_duration parameter of ffmpeg does not work very well for me, and it only sets the minimum duration, while I want to limit the maximum duration of each segment (the segments come out at ~10 seconds, while I need them to be ~2-4 seconds since I'm playing live).
Firstly, it is worth saying that if you can avoid doing this you will save yourself a whole lot of work!
Most devices and clients these days can play both HLS and DASH streams, so the usual approach is to add any extra functionality needed in your app or client.
If you do have to convert server side, then it's worth being aware that while HLS streams typically used TS segments in the past, support for fragmented MP4 has recently become available within the HLS ecosystem.
If you have TS video streams then you will need to do a conversion along the lines you outline above with ffmpeg.
If you have fragmented MP4 then you should actually have the correct format already, and you may find you just have to create the manifest file so DASH can access the fragmented mp4 streams.
All the above assumes that your content is not encrypted, or that you don't have to support encryption. If it is encrypted, you may not be able to convert the media, or you may have to encrypt the media differently for some streams than others, as most deployed Windows and Chrome devices and browsers currently use a slightly different encryption approach (a different AES mode) than Apple devices.
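As a rough sketch of the remux approach described above, assuming a reasonably recent ffmpeg in which the dash muxer exposes -seg_duration (the replacement for -min_seg_duration), and with a placeholder input URL and output path:
ffmpeg -i https://example.com/playlist.m3u8 -c copy \
       -seg_duration 2 -use_template 1 -use_timeline 1 \
       -window_size 5 -extra_window_size 5 \
       -f dash /var/www/dash/out.mpd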

Wrap a stream of raw H264 NALUs into a container like MP4

I have an application that sends raw H264 NALUs generated on the fly by encoding with x264's x264_encoder_encode. I am receiving them over plain TCP, so I am not missing any frames.
I need to be able to decode such a stream in the client using hardware acceleration in Windows (DXVA2). I have been struggling to find a way to get this to work using FFMPEG. Perhaps it would be easier to try Media Foundation or DirectShow, but they won't take raw H264.
I either need to:
Change the code in the server application to give back an mp4 stream. I am not that experienced with x264. I was able to get raw H264 by calling x264_encoder_encode, following the answer to this question: How does one encode a series of images into H264 using the x264 C API? How can I go from this to something that is wrapped in MP4 while still being able to stream it in real time?
Wrap it with mp4 headers at the receiver and feed it into something that can play it using DXVA. I wouldn't know how to do this (a remuxing sketch is shown below).
Find another way to accelerate it using DXVA with FFMPEG, or something else that accepts the raw format.
An important restriction is that I need to be able to pre-process each decoded frame before displaying it. Any solution that does decoding and displaying in a single step would not work for me.
I would be fine with any of these solutions.
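As a sketch of option 2, raw Annex-B H.264 can be wrapped into fragmented MP4 without re-encoding using the ffmpeg CLI; whether the result fits the DXVA pipeline is an assumption, and the pipe usage is only illustrative:
# read raw H.264 from stdin, wrap it in fragmented MP4, write MP4 to stdout
ffmpeg -f h264 -i pipe:0 -c copy \
       -movflags +frag_keyframe+empty_moov \
       -f mp4 pipe:1 > wrapped.mp4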
I believe you should be able to use H.264 packets off the wire with Media Foundation. There's an example on page 298 of this book http://www.docstoc.com/docs/109589628/Developing-Microsoft-Media-Foundation-Applications# that uses an HTTP stream with Media Foundation.
I'm only learning Media Foundation myself and am trying to do something similar to you: in my case I want to use H.264 payloads from RTP packets, and from my understanding that will require a custom IMFSourceReader. Accessing the decoded frames should also be possible from what I've read, since there seems to be complete flexibility in chaining components together into topologies.

What is the minimum amount of metadata needed to stream video only, using libx264 to encode at the server and libffmpeg to decode at the client?

I want to stream video (no audio) from a server to a client. I will encode the video using libx264 and decode it with ffmpeg. I plan to use fixed settings (at the very least they will be known in advance by both the client and the server). I was wondering if I can avoid wrapping the compressed video in a container format (like mp4 or mkv).
Right now I am able to encode my frames using x264_encoder_encode. I get a compressed frame back, and I can do that for every frame. What extra information (if anything at all) do I need to send to the client so that ffmpeg can decode the compressed frames, and, more importantly, how can I obtain it with libx264? I assume I may need to generate NAL information (x264_nal_encode?). Having an idea of the minimum necessary to get the video across, and how to put the pieces together, would be really helpful.
I found out that the minimum amount of information is the NAL units from each frame; this gives me a raw H264 stream. If I write this stream to a file, I can watch it with VLC by giving the file a .h264 extension.
I can also open such a file using ffmpeg, but if I want to stream it, then it makes more sense to use RTSP, and a good open source library for that is Live555: http://www.live555.com/liveMedia/
In their FAQ they mention how to send the output from your encoder to live555, and there is source for both a client and a server. I have yet to finish coding this, but it seems like a reasonable solution.
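As a quick sanity check of that raw-stream idea (out.h264 is just a placeholder file name), the dumped NAL units can be inspected and played with the ffmpeg tools, or wrapped into MP4 without re-encoding:
# confirm the raw Annex-B stream parses as H.264
ffprobe -f h264 out.h264
# play it directly
ffplay -f h264 out.h264
# or wrap it into an MP4 container without re-encoding
ffmpeg -f h264 -i out.h264 -c copy out.mp4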

RTMP parsing with multiple Audio Video Session in the pcap

I have to write an RTMP parser which will handle packets captured from an RTMP stream in Wireshark, and I will extract the data from the pcap.
I have gone through the specs and I am able to understand the handshake process and locate the media in the TCP packets, but I am confused about the case of multiple audio/video sessions interleaved within a single pcap: how can we handle that in the parser so that it can parse multiple streams simultaneously? Any unique identifier would be very helpful for telling the different RTMP streams apart.
EDIT (after #Martin Redmond's answer): That much I am able to figure out, but it seems like FLV data is being streamed over RTMP with the FLV header missing, and there seems to be a separate handshake and FLV data streaming for the same IP on different ports. So I am not able to tell whether it is a real FLV file or only a header, because if I extract just the header and the other data, I am not able to make a working FLV file from it.
Is there any way to validate or extract the media from that RTMP stream?
The header information for each chunk of data lets you figure out which stream the chunk belongs to. It's not straightforward, though: the header information gets compressed, and the relevant info may have been sent only at the beginning of the stream, so you need to keep a context for each chunk.
The important part is the streamid. Video and audio from the same source will have the same streamid but different channel numbers and datatypes.
In the spec the streamid is referred to as the message stream ID (section 6.1.2.1) and is only sent with a type 0 header.

Save Live Video Stream To Local Storage

Problem:
I have to save live video stream data that arrives as RTP packets from an RTSP server.
The data comes in two formats: MPEG4 and H264.
I do not want to encode/decode the input stream,
just write it to a file that is playable with the proper codecs.
Any advice?
Best wishes
History:
My solutions and their problems:
First attempt: FFmpeg
I use the FFmpeg library to get the audio and video RTP packets,
but in order to write the packets I have to use av_write_frame,
which seems to mean that decoding/encoding takes place.
Also, when I give the output format as mp4 (av_guess_format("mp4", NULL, NULL)),
the output file is unplayable.
[Anyway, ffmpeg has poor documentation; it is hard to find out what is wrong.]
Second attempt: DirectShow
Then I decided to use DirectShow. I found an RTSP source filter,
then a mux and a file writer,
and created a single graph:
RTSP Source --> MPEG MUX --> File Writer
It worked, but the problem is that the output file is not playable
if the graph is not stopped. If something happens, for example the graph crashes,
the output file is not playable.
Also, I am able to write H264 data, but the video is completely unplayable.
The MP4 file format has an index that is required for correct playback, and the index can only be created once you've finished recording. So any solution using MP4 container files (and other indexed formats) is going to suffer from the same problem: you need to stop the recording to finalise the file, or it will not be playable.
One solution that might help is to break the graph up into two parts, so that you can keep recording to a new file while stopping the current one. There's an example of this at www.gdcl.co.uk/gmfbridge.
If you try the GDCL MP4 multiplexor, and you are having problems with H264 streams, see the related question GDCL Mpeg-4 Multiplexor Problem
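As a possible way around the finalisation problem (a sketch using the ffmpeg CLI rather than the DirectShow approach above): the stream can be copied without re-encoding into a container that stays playable even if recording is interrupted, such as MPEG-TS or fragmented MP4. The RTSP URL and file names below are placeholders:
# copy the compressed stream into MPEG-TS, which needs no finalisation step
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f mpegts out.ts
# or into fragmented MP4, which also remains playable if recording stops abruptly
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy \
       -movflags +frag_keyframe+empty_moov out.mp4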
