I'm trying to write a script to run an automated video call between two bots, where instead of streaming video from a camera, they stream a video from a file.
Any ideas on how I can do that?
Thanks in advance
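A common trick here, assuming the bots drive a Chromium-based browser, is Chromium's fake-capture flags, which substitute a Y4M file for the camera. A minimal sketch; the file names and room URL are placeholders:

    # convert the source clip to Y4M, the format Chromium's fake capture device expects
    ffmpeg -i source.mp4 -pix_fmt yuv420p bot_video.y4m

    # launch each bot's browser with the file standing in for a real camera
    chromium --use-fake-ui-for-media-stream \
             --use-fake-device-for-media-stream \
             --use-file-for-fake-video-capture=bot_video.y4m \
             "https://example.com/call-room"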
As a beginner at working with these kinds of real-time streaming services, I've spent hours trying to work out how this is possible, but can't seem to work out precisely how I'd go about it.
I'm prototyping a personal basic web app that does the following:
In a web browser, the web application has a button labelled 'Stream Microphone'. When pressed, it streams the audio from the user's microphone (the user obviously has to consent to sharing their microphone audio) to the server, which I was presuming would run Node.js (no specific reason at this point; it's just how I thought I'd go about it).
The server receives the audio close to real time somehow (I'm not sure how I'd do this).
I can then run ffmpeg on the command line, take the audio as it arrives, and add it as the sound to a video file (let's just say I'm going to play testmovie.mp4) that I want to play.
I've looked at various solutions - such as WebRTC, RTP/RTSP, piping audio into ffmpeg, GStreamer, Kurento, Flashphoner and/or Wowza - but they all look overly complicated and usually focus on video along with audio. I just need to work with audio.
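(To make the last step above concrete: once audio is arriving as raw PCM, piping it into FFmpeg's stdin and muxing it with the video file is one command. A minimal sketch, assuming mono 48 kHz signed 16-bit PCM on stdin and a placeholder RTMP output:

    # video comes from the file, audio from stdin; -shortest ends the output when either runs out
    ffmpeg -re -i testmovie.mp4 \
           -f s16le -ar 48000 -ac 1 -i pipe:0 \
           -map 0:v -map 1:a -c:v copy -c:a aac -shortest \
           -f flv rtmp://localhost/live/demo

The hard part, as the answers below discuss, is getting the browser's audio to the server in the first place.)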
As you've found, there are numerous options for receiving the audio from a WebRTC-enabled browser. From easiest to most difficult, they are probably:
Use a WebRTC-enabled server such as Janus, Kurento or Jitsi (not sure about Wowza). These servers tend to have plugin systems, and one of them may already have the audio-mixing capability you need.
If you're comfortable with Node, you could use the werift library to receive the WebRTC audio stream and then forward it to FFmpeg.
If you want to take full control over the WebRTC pipeline, and potentially do the audio mixing as well, you could use GStreamer. From what you've described, it should be capable of the complete task without having to involve a separate FFmpeg process (see the sketch below).
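As a rough illustration of the GStreamer route: the sketch below assumes the audio has already been decrypted and is arriving as plain RTP/Opus on a local UDP port (a full WebRTC session would terminate in webrtcbin instead), and that testmovie.mp4 carries an H.264 video track:

    # assumes plain RTP/Opus on UDP port 5004 and an H.264 video track in testmovie.mp4;
    # -e sends EOS on Ctrl-C so mp4mux can finalize the file
    gst-launch-1.0 -e mp4mux name=mux ! filesink location=out.mp4 \
      udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS" \
        ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! audioresample \
        ! avenc_aac ! mux. \
      filesrc location=testmovie.mp4 ! qtdemux ! h264parse ! mux.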
The way we did this was by creating a Wowza module in Java that takes the audio from the incoming stream, takes the video from wherever you want it, and mixes them together.
There's no reason to introduce a third party like ffmpeg into the mix.
There's even a sample from Wowza for this: https://github.com/WowzaMediaSystems/wse-plugin-avmix
Preface
I have read this two-part tutorial (Part-1 and Part-2) by Streamroot on MPEG-DASH, and below is my understanding (please correct me if I am wrong):
The video needs to be encoded into multiple bit-rates using FFmpeg.
The encoded videos need to be packaged ("dashified") using MP4Box.
The dashified videos can be served using a web server.
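For reference, the two steps might look roughly like this for a VOD file; file names, bitrates and segment length are placeholders:

    # Step 1: encode two bitrate renditions with FFmpeg
    ffmpeg -i input.mp4 -c:v libx264 -b:v 1500k -c:a aac -b:a 128k out_1500k.mp4
    ffmpeg -i input.mp4 -c:v libx264 -b:v 800k  -c:a aac -b:a 96k  out_800k.mp4

    # Step 2: "dashify" with MP4Box: cut 4-second segments and write the MPD manifest
    MP4Box -dash 4000 -rap -out manifest.mpd out_1500k.mp4 out_800k.mp4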
Problem
I intend to live-stream an event and I need help to understand the following:
Can I combine the FFmpeg and MP4Box commands into a single step, perhaps through a wrapper program, so that I do not have to run them separately? Is there any other or better solution?
How do I send the dashified content to the web server? FTP? Would any vanilla web server do?
Lastly, a friend hinted that I could also use GStreamer to achieve my objective, but I could not find any good resources on it online. So, where (and how) does GStreamer fit into the above process?
What format will you be getting out of your camera for your live event? There are many solutions much better adapted to live streaming (the tutorial I wrote is for VOD streams only). You can check out simple solutions like Wowza Streaming Server, Nimble Streamer (free), etc., which take an RTMP stream and transform it into other formats (HLS, DASH, etc.).
Most of the livestreaming platforms can even do that for you (livestream.com, YouTube, Twitch, or even Facebook now).
The dashified content will be requested as HTTP resources by the browser or other players. In the case of a VOD stream, you indeed just need to make the DASH segments available through a web server. For live content, you need something smarter that will encode and package the segments and make them available on the fly.
GStreamer can transcode and transmux the original content, and can do it on the fly. You will be able to get different formats as outputs, like RTMP, HLS, and probably even MPEG-DASH. You will then still need to make your content available via a web server.
In conclusion, if you just want to transmit an occasional live event, it's probably a lot easier to use a platform that will ingest your RTMP stream and do all the complicated steps for you.
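For the ingest side, pushing a source to such a platform is typically a single FFmpeg invocation; a rough sketch, with the source file and ingest URL as placeholders:

    # encode a local source and push it to the platform's RTMP ingest point
    ffmpeg -re -i input_source.mp4 \
           -c:v libx264 -preset veryfast -b:v 2500k -g 50 \
           -c:a aac -b:a 128k \
           -f flv "rtmp://ingest.example.com/live/STREAM_KEY"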
I'm really dumb and new to RTP/SIP. Is there a recommended stack for uploading video to the cloud from a camera attached to a microprocessor? What's the difference between all the things I'm seeing: MPEG-DASH, Live555, ffmpeg, and so on?
How does WhatsApp or Dropcam transmit live video?
Uploading a video on its own is fairly trivial. If the video has already been captured, you just need to give your app access to the local media store, list the files, and then upload using any standard technology, e.g. HTTP PUT, HTTP POST, FTP, S3, etc.
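For instance, at the protocol level an HTTP PUT of an already-captured file is a one-liner (the endpoint is hypothetical):

    # a single HTTP PUT of an already-captured file
    curl -T recorded.mp4 https://upload.example.com/videos/recorded.mp4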
If you want to process the video first, you would be wise to do so via a library such as ffmpeg, which has been compiled for Android: https://trac.ffmpeg.org/wiki/How%20to%20compile%20FFmpeg%20for%20Android
If you want to live-stream the video (hence your reference to RTP/SIP, etc.), you will need to access the camera. You could start with something like the Kickflip SDK, which includes a bundled ffmpeg (https://github.com/Kickflip/kickflip-android-sdk), or libstreaming: https://github.com/fyhertz/libstreaming
The goal is for a user to send a stream of images to a server. The server should forward those to a media server, which shows them as a continuous live video to clients.
Following are my thoughts on implementing this; kindly tell me whether they are OK:
Use a lightweight RTMP server to accept the stream of images from a user (please suggest whether this is even possible via RTMP, and whether it can be done easily and efficiently some other way).
Use ffmpeg to take the RTMP (or other) URL as input and send those images to ffserver for streaming. (I am also confused here: if ffserver is fed images continuously, can it show them as video for as long as the images keep coming? See the sketch below.)
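On point 2: ffmpeg can indeed turn a continuously arriving series of images into live video, but note that ffserver is deprecated (and removed from modern FFmpeg builds), so pushing straight to an RTMP server such as nginx-rtmp is the more common route today. A rough sketch, assuming JPEG frames are written to ffmpeg's stdin as they arrive; frame rate and URL are assumptions:

    # JPEG frames piped to stdin become a live H.264 stream for as long as frames keep arriving
    ffmpeg -f image2pipe -framerate 2 -c:v mjpeg -i pipe:0 \
           -c:v libx264 -pix_fmt yuv420p -r 25 -g 50 \
           -f flv rtmp://localhost/live/slideshow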
I think an easier solution would be to use an RTMP live encoder that is capable of streaming a slideshow. This can be achieved by playing the slideshow in Windows Photo Viewer and setting the live encoder to capture the Windows Photo Viewer output and stream it as live. Two RTMP encoders that should be able to do this are OBS (Open Broadcaster Software) and XSplit. Another (free) solution would be to use Adobe Flash Media Live Encoder in combination with a piece of software called ManyCam. ManyCam can capture a feed from video, images, etc. and feed it to FMLE through a driver. Install ManyCam and create a slideshow of images using the playlist option, then start FMLE and select the ManyCam driver. You can now stream the slideshow as live to an RTMP server.
The idea is to create a video from images provided by a user and, at the same time, stream the generated video to other users requesting it.
Kindly suggest an efficient way to do this, and tell me which of PHP and C#/.NET would be suitable.
I have looked into using ffmpeg to take images, convert them to video, save it to the server, and then stream it. Kindly tell me whether this is possible, or whether there is another method for live streaming.
Regards
UPDATE
Consider the following scenario as I understand it:
Get images from the server and start combining them to form a video. At the same time, stream the video to the users requesting it. For newly arriving clients, stream the previously generated video from the beginning, while continuing to send the newly generated video to the existing clients.
Kindly tell me whether this is possible and, if so, what the approach could be. I have read something about pipes, but I am completely new to ffmpeg and streaming in general.
Yes, this is possible with ffmpeg. Any language that is Turing-complete is suitable. There are many methods of live streaming, including HLS, RTP, RTMP, etc.
If you need more detailed answers. Please ask more detailed questions.
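As one concrete sketch of such a method: the command below pipes incoming JPEG frames into ffmpeg (which either PHP or C# can spawn and write to) and publishes an HLS playlist that any vanilla web server can serve. The 'event' playlist type keeps every segment, so newly joining viewers can start from the beginning, matching the scenario in the update. Paths and rates are assumptions:

    # turn JPEG frames arriving on stdin into an HLS live stream under the web root
    ffmpeg -f image2pipe -framerate 2 -c:v mjpeg -i pipe:0 \
           -c:v libx264 -pix_fmt yuv420p -g 50 \
           -f hls -hls_time 4 -hls_playlist_type event \
           /var/www/html/live/stream.m3u8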