Live audio streaming from IP camera/rtsp to website - ffmpeg

I have a Dahua IP camera with a microphone. I would like to get the audio stream playing on a website like a live radio.
I have a raspberry pi which I was planning to use with ffmpeg, but I haven't had much success in bridging the gap between that and my website to form an audio stream.
Can this be done via SFTP/FTP with MP3 files and some fancy PHP/JavaScript to play like a live radio?
Do I need to use another service? (would like to minimise costs as much as possible!)
Thanks!
Peter

Browsers can't play an RTSP source directly, so you need a streaming server to pull the RTSP stream and convert it to something suitable for HTML5 playback (HLS/DASH).
This can be done with Wowza SE (the free developer license can be used for 10 subscribers).
You can see how this works with the VideoNow.Live - IP Camera Restreaming feature, and take a peek at the open source code of the Live Streaming HTML5 RTSP WP plugin that powers that solution.
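If you would rather do the conversion on the Raspberry Pi itself, ffmpeg alone can also repackage the camera's audio as HLS segments that any plain web server (and an HTML5 audio tag) can serve. A rough sketch, where the RTSP URL, credentials and output path are placeholders for your own camera and web root:

# pull only the audio from the camera and write an HLS playlist plus segments
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream1" \
  -vn -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments \
  /var/www/html/radio/live.m3u8

Point an HTML5 audio element at /radio/live.m3u8; Safari plays HLS natively, while other browsers need a small JavaScript player such as hls.js.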

Related

Can I read an encoded stream from a URL with WebRTC

I'm trying to stream the video of my C++ 3D application (similar to streaming a game).
I have encoded an H.264 video stream with the ffmpeg library (i.e. internally to my application) and can push it to a local address, e.g. rtp://127.0.0.1:6666, which can be played by VLC or other player (locally).
I'm not particularly wedded to H.264 at this point, or to RTP. I could send it as SRTP if that would help.
I'd like to use WebRTC to set up a connection across different machines, but can't see in the examples how to make use of this pre-existing stream - the video and audio examples are understandably focused on getting data from devices like connected web cams, or the display.
Is what I'm thinking feasible? I.e. ideally I'd just point WebRTC at my rtp://127.0.0.1:6666 address and that would be the video stream source.
I am writing out an sdp file as well which can be read by VLC, could I use this in a similar way?
As noted in the comment below, there is an example out there using Go to weave some magic that enables an RTP stream to be shown in a browser via WebRTC.
I am trying to find a more "standard" way to set the source of a video track in WebRTC to the URL of an encoded stream. If there isn't one, that is valuable information to me too, as I can change tack and use a WebRTC library to send frames directly.
Unfortunately FFmpeg doesn't support WebRTC output. It lacks support for ICE and DTLS-SRTP.
You will need to use an RTP -> WebRTC bridge. I wrote rtp-to-webrtc, which can do this. You can do this with lots of different WebRTC clients/servers!
If you have a particular language/paradigm that you prefer, I'm happy to provide examples for those.
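For reference, the ffmpeg side of such a bridge is just a plain RTP push. A rough sketch, assuming the bridge listens for H.264 over RTP on UDP port 5004 (the port, input and encoder settings are placeholders to be matched to whatever your bridge expects):

# encode H.264 and push it as raw RTP to the local RTP -> WebRTC bridge
ffmpeg -re -i input.mp4 -an \
  -c:v libx264 -profile:v baseline -tune zerolatency \
  -f rtp "rtp://127.0.0.1:5004?pkt_size=1200"

The bridge then takes care of ICE and DTLS-SRTP and hands the packets to the browser.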

Stream microphone from client browser to remote server and pass audio in real time to ffmpeg to combine with a second video source

As a beginner at working with these kinds of real-time streaming services, I've spent hours trying to work out how this is possible, but can't seem to work out how precisely I'd go about it.
I'm prototyping a personal basic web app that does the following:
In a web browser, the web application has a button that says 'Stream Microphone'. When pressed, it streams the audio from the user's microphone (the user obviously has to consent to give permission to send their microphone audio) through to the server, which I was presuming would be running node.js (no specific reason at this point; I just thought this is how I'd go about doing it).
The server receives the audio close enough to real time somehow (not sure how I'd do this).
I can then run ffmpeg on the command line, take the audio as it comes in in real time, and add it as the sound to a video file that I want to play (let's just say testmovie.mp4).
I've looked at various solutions - such as maybe using WebRTC, RTP/RTSP, piping audio into ffmpeg, GStreamer, Kurento, Flashphoner and/or Wowza - but somehow they look overly complicated and usually seem to focus on video along with audio. I just need to work with audio.
As you've found, there are numerous different options for receiving the audio from a WebRTC-enabled browser. From easiest to most difficult, they are probably:
Use a WebRTC-enabled server such as Janus, Kurento or Jitsi (not sure about Wowza). These servers tend to have plugin systems, and one of them may already have the audio mixing capability you need.
If you're comfortable with node you could use the werift library to receive the WebRTC audio stream and then forward it to FFmpeg (a rough sketch of the FFmpeg side is given below).
If you want to take full control over the WebRTC pipeline and potentially do the audio mixing as well, you could use GStreamer. From what you've described, it should be capable of doing the complete task without having to involve a separate FFmpeg process.
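To make option 2 concrete, here is a rough sketch of the FFmpeg step, assuming the WebRTC side hands the audio over as plain RTP described by a hand-written audio.sdp file; the SDP file, port and output name are all assumptions:

# decode the forwarded RTP audio and marry it to the video of testmovie.mp4
ffmpeg -protocol_whitelist file,udp,rtp -i audio.sdp \
  -i testmovie.mp4 \
  -map 1:v -map 0:a -c:v copy -c:a aac -shortest output.mp4

For a live result you would swap output.mp4 for a streaming muxer (for example -f flv to an RTMP URL); the mapping of "video from the file, audio from the browser" stays the same.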
The way we did this is by creating a Wowza module in Java that would take the audio from the incoming stream, take the video from wherever you want it, and mix them together.
There's no reason to introduce a third party like ffmpeg into the mix.
There's even a sample from Wowza for this: https://github.com/WowzaMediaSystems/wse-plugin-avmix

I want to upload a camera video stream to Amazon S3 and download it to an Android phone. I'm completely new to this. How can I do this?

I'm really dumb and new to RTP/SIP. Is there a stack that's recommended for uploading video to the cloud from a camera attached to a microprocessor? What's the difference between all the things I'm seeing - MPEG DASH, Live555, ffmpeg, and so on...?
How does WhatsApp or Dropcam transmit live video?
Uploading a video on its own is fairly trivial. If the video has already been captured, you just need to give your app access to the local media store, list the files and then upload using any standard technology, e.g. HTTP PUT, HTTP POST, FTP, S3, etc.
If you want to process the video first, you would be wise to process it via a library such as ffmpeg which has been compiled for Android, e.g. https://trac.ffmpeg.org/wiki/How%20to%20compile%20FFmpeg%20for%20Android
If you want to live stream the video (hence your reference to RTP/SIP etc.), you will need to access the camera. You could start with something like the Kickflip SDK, which includes a bundled ffmpeg: https://github.com/Kickflip/kickflip-android-sdk, or libstreaming: https://github.com/fyhertz/libstreaming
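For the "process it first" case, a typical ffmpeg pre-upload transcode might look something like this (the file names and bitrates are placeholders):

# shrink the capture to a web/mobile friendly MP4 before pushing it to S3
ffmpeg -i capture.mp4 -c:v libx264 -preset fast -b:v 1500k \
  -c:a aac -b:a 128k -movflags +faststart upload_ready.mp4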

suggestions for using protocol to stream images

The target is that a user sends a stream of images to a server. The server should forward those to a media server to be shown as a continuous live video to clients.
Following are my thoughts on implementing this; kindly tell me if they are OK.
Use a lightweight RTMP server to accept the stream of images from a user (please suggest whether this is even possible via RTMP, and whether it can be easily and efficiently done otherwise).
Use ffmpeg to take the RTMP (or other) URL as input and send those images to ffserver for streaming. (I am also confused here: if ffserver is fed with images continuously, can it show those images as video for as long as the images keep coming?)
I think an easier solution would be to use an RTMP live encoder that is capable of streaming a slideshow. This can be achieved by playing the slideshow in Windows Photo Viewer and then setting the live encoder to capture the Windows Photo Viewer output and stream it as live. Two RTMP encoders that should be able to do this are OBS (Open Broadcaster Software) and XSplit. Another (free) solution would be to use Adobe Flash Media Live Encoder in combination with a piece of software called ManyCam. ManyCam can capture a feed from video, images, etc. and feed it to FMLE via a driver. Install ManyCam and create a slideshow of images in the playlist option, then start FMLE and select the ManyCam driver. You can now stream the slideshow as live to an RTMP server.
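If you would rather stay with your original command-line ffmpeg idea, piping the incoming images straight into ffmpeg and publishing to an RTMP server is another option. A rough sketch, where feed_images.sh is a hypothetical process that keeps writing PNG data to stdout, and the frame rate and RTMP URL are placeholders:

# decode the piped PNGs, re-encode as H.264 and publish to the RTMP server
./feed_images.sh | ffmpeg -f image2pipe -c:v png -framerate 2 -i - \
  -c:v libx264 -pix_fmt yuv420p -r 25 -g 50 \
  -f flv rtmp://your-server/live/slideshow

Note that ffserver has since been removed from FFmpeg, so an RTMP-capable server (nginx with the RTMP module, for example) is the more usual target today.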

Encoding and streaming continuous PNG output image files as live video streaming on Web browser

I have an Open GL application that renders a simulation animation and outputs several PNG image files per second and saves these files in a disk. I want to stream these image files as a video streaming over HTTP protocol so I can view the animation video from a web browser. I already have a robust socket server that handles connection from websocket and I can handle all the handshake and message encoding/decoding part. My server program and OpenGL application program are written in C++.
A couple of questions in mind:
what is the best way to stream this OpenGL animation output and view it from my web browser? The video image frames are dynamically (continuously) generated by the OpenGL application as PNG image files. The web browser should display the video corresponding to the Open GL display output (with minimum latency time).
How can I encode these PNG image files as a continuous (live) video programmatically using C / C++ (without me manually pushing the image files to a streaming server software, like Flash Media Live Encoder)? What video format should I produce?
Should I send/receive the animation data using a websocket, or is there any other better way? (Like a jQuery Ajax call instead; I am just making this up, but please guide me through the correct way of implementing this.) It would be great if this live video streaming worked across different browsers.
Does HTML5 video tag support live video streaming, or does it only work for a complete video file which exists at a particular URL/directory (not a live streaming)?
Are there any existing code samples (tutorials) for doing this live video streaming, where you have a C/C++/Java application producing some image frames and a web browser consuming this output as a video stream? I could barely find tutorials about this topic after spending a few hours searching on Google.
You definitely want to stop outputting PNG files to disk and instead feed the frames of image data into a video encoder. A good bet is to use libav/ffmpeg. Next, you will have to encapsulate the encoded video in a network-friendly format. I would recommend x264 as the encoder and MPEG-4 or MPEG-2 TS as the stream format.
To view the video in the web browser, you'll have to choose the streaming format. HLS in HTML5 is supported by Safari, but unfortunately not much else. For wide client support you will need to use a plugin such as Flash or a media player.
The easiest way I can think of to do this is to use Wowza to do a server-side restream. The GL program would stream MPEG-2 TS to Wowza, and it would then prepare streams for HLS, RTMP (Flash), RTSP, and Microsoft Smooth Streaming (Silverlight). Wowza costs about $1000. You could set up an RTMP stream using Red5, which is free. Or you could do RTSP serving with VLC, but RTSP clients are universally terrible.
Unfortunately, at this time, the level of standardization for web video is very poor, and the video tooling is rather cumbersome. It's a large undertaking, but you can get hacking with ffmpeg/libav. A proof of concept could be to write image frames in YUV420p format to a pipe that ffmpeg is listening on and to choose an output stream that you can read with an RTSP client such as VLC, QuickTime, or Windows Media Player.
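A rough sketch of that proof of concept, assuming the OpenGL application (gl_app here is a placeholder) writes raw YUV420p frames to ffmpeg's stdin; ffmpeg encodes them with x264 and pushes an MPEG-TS stream, shown over UDP rather than RTSP for simplicity. Resolution, frame rate and destination are placeholders:

# read raw frames from the pipe, encode with x264, push MPEG-TS over UDP
./gl_app | ffmpeg -f rawvideo -pix_fmt yuv420p -s 1280x720 -r 30 -i - \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -f mpegts udp://127.0.0.1:1234

VLC can then play the result by opening udp://@:1234.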
Most live video is MPEG-2 internally, wrapped up as RTMP (Flash) or HLS (Apple). There is probably a way to render your OpenGL output to frames and have them converted into MPEG-2 as a live stream, but I don't know exactly how (maybe FFmpeg?). Once that is done you can push the stream through Flash Media Live Encoder (it's free) and stream it out to Flash clients directly using RTMP, or push publish it into Wowza Media Server to package it for Flash, HTTP Live Streaming (Cupertino), or Smooth Streaming for Silverlight.
Basically you can string together some COTS solutions into a pipeline and play it on a standard player without handling the sockets and low-level stuff yourself.
