How to convert RTMP to mp4 On The Fly with FFMPEG? - ffmpeg

I want to ask about live streaming. I have a Wowza server and use the RTMP protocol in the web client; the question is how to make it compatible with all devices, desktop and mobile alike. I use ffmpeg, but how do I convert RTMP to MP4 on the fly? What command do I use in ffmpeg? I want to use HTTP, not RTMP or RTSP. Thanks.
Regards,
Panji

If you want live HTTP streaming (HLS), then you should use wowza's cupertinostreamingpacketizer in the <LiveStreamPacketizers> list, and point non-rtmp clients at http://your-server:1935/live/yourstream/playlist.m3u8. No need for ffmpeg. The HLS packetizer is often enabled by default on wowza, so just try opening that URL in an html5+h264 capable browser. Bear in mind your encoding software must encode as h264, not the VP6 codec.
Your HLS stream will be around 30s - 1 minute behind the rtmp stream. If you want the stream to be in sync across devices, consider using HDS (sanjosestreamingpacketizer) instead of rtmp, and pointing your HDS-capable flash player at http://your-server:1935/live/yourstream/manifest.f4m.
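If you would rather generate the HLS output with ffmpeg itself instead of wowza's packetizer, a minimal sketch looks like this (the rtmp URL and the output path are placeholders for your own setup; on older ffmpeg builds you may also need -bsf:v h264_mp4toannexb):
# pull the rtmp stream and repackage it into HLS segments plus a playlist, without re-encoding
ffmpeg -i rtmp://your-server:1935/live/yourstream -c:v copy -c:a copy -f hls -hls_time 4 -hls_list_size 6 /var/www/html/live/playlist.m3u8
Any ordinary web server can then serve playlist.m3u8 and the .ts segments over plain http.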
If you want to record a live stream as an mp4 for later playback, you can use wowza's in-built recording API - see http://www.wowza.com/forums/content.php?123#userinterface.
Alternatively, you can use rtmpdump (generally available as a package on most unix systems) to grab the rtmp stream, then ffmpeg to convert it once the recording has finished:
rtmpdump -q --rtmp "rtmp://your-server:1935/live/" --playpath yourstream -o yourstream.flv --live
ffmpeg -i yourstream.flv -vcodec copy -acodec copy yourstream.mp4 </dev/null
ffmpeg -i yourstream.mp4 -vframes 1 yourstream.jpg </dev/null
The first ffmpeg command converts to an mp4, the second grabs the first frame and saves as a .jpg so you can use it as your poster frame.

Related

What is the process of converting an image stream into an H.264 live video stream?

How does live streaming, using any protocol/codec, work from end-to-end?
I have been searching Google, YouTube, the FFMPEG documentation, the OBS source code, and Stack Overflow, but still cannot understand how live video streaming works. So I am trying to capture desktop screenshots and convert them into a live video stream that is H.264 encoded.
What I know how to do:
Capture screenshot images using Graphics.CopyFromScreen with C# in a loop
Encode the bits and save the images as JPEG files
Send the JPEG images as base64, one at a time, writing each one to a named pipe server
Read the image buffer from the named pipe on a nodejs server
Send each base64 JPEG frame over a socket to the client to display on a web page
What I want to be able to do:
Encode the images (in chunks, I assume) into some H.264 format for live streaming with one of the protocols (RTMP, RTSP, HLS, DASH)
Push the encoded video chunks onto a server (such as an RTMP server) continuously (every 1-2 seconds, I assume?)
Access the server from a client to stream and display the live video
I've tried using FFMPEG to continuously send .mp4 files to an RTMP server, but this doesn't seem to work, as it closes the connection after each video. I have also looked into ffmpeg concat lists, but those just combine videos; to my understanding they can't append videos to a stream that is already being read, and they probably weren't made for that.
So my best lead is from this stackoverflow answer which suggests:
Encode in FLV container, set duration to be arbitrarily long (according to the answer, youtube used this method)
Encode the stream into RTMP, using ffmpeg or other opensource rtmp muxers
Convert stream into HLS
How is this encoding and converting done? Can this all be done with ffmpeg commands?
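For the capture-and-push step specifically, a single ffmpeg command can grab the desktop, encode it with libx264, wrap it in FLV, and push it to an RTMP ingest URL. This is only a sketch, assuming you run it on Windows (gdigrab is ffmpeg's Windows screen-capture device) and that an RTMP server such as nginx with the RTMP module is already listening at the placeholder URL:
# capture the desktop at 30 fps, encode to H.264 tuned for low latency, and push it as FLV over RTMP
ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p -g 60 -f flv rtmp://your-server/live/streamkey
A packetizer on the server (or another ffmpeg process) can then turn that RTMP feed into HLS or DASH for playback in browsers.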

How can I determine the resolution of incoming RTMP streams with ffmpeg?

I'm using ffmpeg to transcode RTMP from my own RTMP server into HLS-ready H.264. At the moment, I'm executing a command of the following form:
ffmpeg -i rtmp://<ip>:<port> <options for 480p> <options for 720p30> <options for 720p60> <options for 1080p>
This means I end up trying to transcode lower-resolution sources into higher resolutions.
The RTMP server I'm using is nginx with the RTMP module.
Is there a way I can determine the source resolution, so that I only transcode into resolutions smaller than the source one?
Thanks to @szatmary's comment I have found the following solution:
I can use the command line tool ffprobe to get information about a stream. Here is the documentation
It says here that
If a url is specified in input, ffprobe will try to open and probe the url content. If the url cannot be opened or recognized as a multimedia file, a positive exit code is returned.
ffprobe can be configured to use different writers and writing options so that it can return results in a range of formats.
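For example, a probe along these lines (a sketch; the rtmp URL is a placeholder for the ingest address) prints just the source resolution:
# print only the width and height of the first video stream, e.g. 1920x1080
ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 rtmp://<ip>:<port>
The reported resolution can then be used to decide which renditions to generate before launching the ffmpeg transcode.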

how ffmpeg internally works to create clip from remote videos

We needed to create clips from a remote video by providing the time duration. This is the command we are using:
ffmpeg -i {{remote_video}} -ss {{start_time}} -flush_packets 1 -codec copy -t {{duration}} -y {{output_file}}
What we are unable to figure out is how FFmpeg actually does this. It does not download the entire video and is still able to generate the clip from the remote video.
We looked into the documentation but found nothing.
I think it will be a combination of the container format and which "protocol" is used. The container needs to support some kind of seeking, and then the protocol used (file, http, etc.) needs to support seeking. For example, the ffmpeg http protocol implementation can do seeks using the Range header if the remote server supports it.
Have a look at https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/http.c if you want to see how it works for http (search for "seek").
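To see the same mechanism outside of ffmpeg, a range request can be issued by hand; this is just an illustration, and example.com stands in for the real video URL:
# request only the first 64 KiB of the file; a server that supports seeking answers "206 Partial Content"
curl -s -D - -o /dev/null -H "Range: bytes=0-65535" https://example.com/video.mp4
When the container's index allows it, ffmpeg can jump straight to the byte ranges it needs with reads like this instead of downloading the whole file.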

stream webcam using ffmpeg and live555

I am new to live555.
I want to stream my webcam from a windows 7 (64-bit) machine behind home LAN using ffmpeg as the encoder to a live555 server running on a Debian 64-bit linux machine in a data center over the WAN. I want to send a H.264 RTP/UDP stream from ffmpeg and the "testOnDemandRTSPServer" should send out RTSP streams to clients that connect to it.
I am using the following ffmpeg command which sends UDP data to port 1234, IP address AA.BB.CC.DD
.\ffmpeg.exe -f dshow -i video="Webcam C170":audio="Microphone (3- Webcam C170)" -an
-vcodec libx264 -f mpegts udp://AA.BB.CC.DD:1234
On the linux server I am running the testOnDemandRTSPServer on port 5555, which expects raw UDP data on AA.BB.CC.DD:1234. I try to open the RTSP stream in VLC using rtsp://AA.BB.CC.DD:5555/mpeg2TransportStreamFromUDPSourceTest
But I get nothing in VLC. What am I doing wrong? How can I fix it?
From what I remember, it was non-trivial to write a DeviceSource class. The problem you're describing is definitely something that's discussed quite frequently on the live555 mailing list - you need to get yourself approved to the list a.s.a.p. if you want to do anything related to rtsp development.
The problem you seem to be having is related to the fact that some video formats are written with streaming in mind, and the rtsp server can easily stream certain formats because they contain "sync bytes" and other markers which it can use to determine where frame boundaries lie. The simplest solution you could use is to get your hands on the SDK for the camera, and use that to request data from the camera. There are many different libraries and toolkits that let you access data from the camera - one of them is the DirectX SDK. Once you have the camera data, you would need to encode it into a streamable format: you might be able to get the raw camera frames using DirectX, then convert them to mp4 / h264 frame data using ffmpeg (libavcodec, libavformat).
Once you have your encoded frame data, you feed it into your DeviceSource class, and it will take care of streaming the data for you. I wish I had code on hand, but I was bound by an NDA not to remove code from the premises; the general algorithm is documented on the live555 website, though, so I am able to explain it here.
I hope you have a bit more luck with this. If you get stuck, then remember to add code to your question. Right now the only thing that's stopping your original plan from working (stream file to VLC) is the file format you chose to stream.
One thing you can try is to increase VLC's logging verbosity level to 2: VLC expects in-band parameter sets, and in that case it will print a debug message in the messages window saying that it is waiting for parameter sets. Just having the parameter sets in the SDP of the RTSP DESCRIBE is not sufficient. IIRC you can configure x264 to output parameter sets periodically, or at least with every IDR frame.
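For instance, if your ffmpeg build passes x264 options through -x264opts, something along these lines (a sketch based on the command from the question, with the audio input dropped since -an disables it anyway) should make libx264 re-emit the parameter sets in-band:
# same capture command, but ask x264 to repeat SPS/PPS before each keyframe
.\ffmpeg.exe -f dshow -i video="Webcam C170" -an -vcodec libx264 -x264opts repeat-headers=1 -tune zerolatency -f mpegts udp://AA.BB.CC.DD:1234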
Other things you can try:
You can test the stream with openRTSP before using VLC. If you run openRTSP -d 5 -Q rtsp://xxx.xxx.xxx.xxx:5555/mpeg2TransportStreamFromUDPSourceTest, openRTSP will print quality statistics after streaming for 5 seconds. Then you will be able to verify that the testOnDemandRTSPServer is indeed relaying the stream, and that there is no problem between the ffmpeg application and the testOnDemandRTSPServer.
Have you tried a different stream? Also, I had a similar problem due to issues with my firewall, you might want to make sure you can actually stream data through those ports.
If you are missing a sync byte, it's probably a stream issue - try using a different data source and see if that helps; try an .avi file or an .mp4 file, since .mp4 files are usually easy to stream. If streaming works with the .mp4 file and not with your mpegts file, then it's a problem with your file - ffmpeg is trying to figure out where each "frame" or "frame set" of data ends so that it can stream discrete chunks.
It's been over 2 years since I last worked with this stuff, so let me know if you get anywhere.

ffmpeg settings or alternatives to ffmpeg on raspberry pi for video streaming

I have a Raspberry Pi (model B) running raspbian wheezy on a 16gb SD card. I also have 32gb of flash storage attached via USB. I'm trying to stream a video (an h264-encoded mp4 file, 1280x720) over ethernet from that flash storage.
I'm using ffmpeg+ffserver. Here is ffserver.conf (relevant parts):
...
MaxBandwidth 10000
<Feed feed1.ffm>
...
FileMaxSize 100M
ACL allow 127.0.0.1
</Feed>
...
<Stream test.flv>
Feed feed1.ffm
Format flv
VideoSize 288x176 #made small just for testing
NoAudio
</Stream>
....
I start the ffserver, then call ffmpeg with this command:
ffmpeg -re -an -i /mnt/u32/main.mp4 -r 25 -bit_rate 300k http://localhost:8090/feed1.ffm
And I'm getting fps 3-5 at most. Naturally when I try to view the stream on another computer it's very choppy and virtually unusable.
Am I missing some settings? Or perhaps there is another streaming solution that leverages the GPU instead of just the CPU as ffmpeg does? I'm even open to suggestions about other boards (e.g. a pandaboard? or clustering several RPi's?) Also, I'm flexible about the output format.
Try the nginx-rtmp module. I managed to stream pretty well with ffmpeg that way. The appropriate codec to use for streaming video is h264. I made a python script that runs ffmpeg and streams through nginx; maybe it will help you.
It may also be possible to use hardware encoding with ffmpeg now.
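A hedged sketch of both suggestions, assuming nginx with the RTMP module is listening at rtmp://localhost/live and the source mp4 is already h264 so the video can be passed through untouched:
# no re-encoding at all: the Pi only remuxes the h264 video into FLV and pushes it to nginx-rtmp
ffmpeg -re -i /mnt/u32/main.mp4 -c:v copy -an -f flv rtmp://localhost/live/stream
# if re-encoding is unavoidable, newer ffmpeg builds with OMX support can use the Pi's hardware encoder
ffmpeg -re -i /mnt/u32/main.mp4 -c:v h264_omx -b:v 1M -an -f flv rtmp://localhost/live/stream
Stream copy keeps the CPU load near zero, which is usually what limits the frame rate on a model B Pi.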
