Today I tried using ffmpeg on my Debian 8.3 server to livestream 24/7. However, it doesn't work.
#!/bin/bash
INRES="1280x1024" # input resolution (The resolution of the program you want to stream!)
OUTRES="1024x790" # Output resolution (The resolution you want your stream to be at)
FPS="60" # target FPS
QUAL="ultrafast"
# one of the many FFmpeg presets that can be used
# If you have low upload bandwidth, set the preset to 'ultrafast'
# If you have medium bandwidth, set it to 'medium' or 'fast'
STREAM_KEY="hidden" # this is your stream key
ffmpeg -f "file.avi" -s "$INRES" -r "$FPS" -i :0.0 \
-f alsa -ac 2 -i pulse -vcodec libx264 -s "$OUTRES" \
-acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
-f flv "rtmp://a.rtmp.youtube.com/live2"
It gives me the output:
Unknown input format: 'file.avi'
This
ffmpeg -f "file.avi" ...
should be
ffmpeg -i "file.avi" ...
Related
I am now playing with raspivid on Raspbian and a Raspberry Pi equipped with a Pi NoIR camera module.
I am almost done with the setup and have found a pre-compiled version of FFmpeg 3.1.1 to experiment with streaming to a YouTube live stream by means of the command:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | ffmpeg -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<SESSION>
Is there any parameter I can use to also stream to a local machine (e.g. through a VLC client reading the stream)?
I have managed to do it in another bash script with cvlc:
cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}':demux=h264
but executing both scripts at the same time is not possible, as the input camera is locked by the system.
So, I looked in the ffmpeg documentation and found an interesting thread on multiple outputs: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs.
Then, I tried piping the processes, appending another ffmpeg call to the initial command:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | ffmpeg -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<SESSION> | ffmpeg -f h264 -i - -vcodec copy -f rtsp -rtsp_transport tcp rtsp://localhost:8888/live.sdp
It appears to have a syntax error, and maybe it is not the best way of achieving this. Could you please put me on the right track?
Thanks and have a nice night !
Nicolas
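One way to get on the right track (a sketch, untested, and assuming bash for the process substitution): instead of chaining the two ffmpeg calls, duplicate the raspivid output with tee so each ffmpeg reads its own copy of the H.264 feed. This also assumes an RTSP server is listening at localhost:8888 to accept the published stream, as in the cvlc setup:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | tee >(ffmpeg -f h264 -i - -vcodec copy -f rtsp -rtsp_transport tcp rtsp://localhost:8888/live.sdp) | ffmpeg -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<SESSION>
Be aware that if either consumer stalls, the whole pipeline stalls; ffmpeg's tee muxer, described on the wiki page you found, is the single-process alternative.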
I'm trying to get a Blackmagic UltraStudio Mini Recorder to stream via avconv to HLS. To test, it's hooked up to an Apple TV, and this is the command I'm using:
./bmdcapture -m 14 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt yuyv422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8
However, the colors are all messed up, suggesting the color format is set incorrectly. The input format is 1280x720 @ 59.94 FPS (which is correct), and I've set the format to yuyv422 (though nothing else I've set this to has fixed the error).
Got it!
The Mini Recorder captures 10-bit video rather than the 8-bit I had assumed (Adobe's live encoder reported it as 8).
Here is the fixed code:
./bmdcapture -m 14 -p yuv10 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt uyvy422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8
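If you ever need to confirm what the capture card is actually emitting, probing the piped NUT stream with ffprobe should work (a sketch; ffprobe reads the pipe on stdin):
./bmdcapture -m 14 -p yuv10 -C 0 -F nut -f pipe:1 | ffprobe -f nut -i pipe:0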
What format/syntax is needed for ffmpeg to output the same input to several different "output" files? For instance, different formats or different bitrates? Does it support parallelism on the output?
The ffmpeg documentation has been updated with much more information about this, and the options depend on the version of ffmpeg you use: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
From the FFmpeg documentation: FFmpeg writes to an arbitrary number of output "files".
Just make sure each output file (or stream) is preceded by the proper output options.
I use
ffmpeg -f lavfi -re -i 'life=s=300x200:mold=10:r=25:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16' \
-f lavfi -re -i sine=frequency=1000:sample_rate=44100 -pix_fmt yuv420p \
-c:v libx264 -b:v 1000k -g 30 -keyint_min 60 -profile:v baseline -preset veryfast -c:a aac -b:a 96k \
-f flv "rtmp://yourname.com:1935/live/stream1" \
-f flv "rtmp://yourname.com:1935/live/stream2" \
-f flv "rtmp://yourname.com:1935/live/stream3" \
Is there any reason you can't just run more than one instance of ffmpeg? I've had great results with that...
Generally what I've done is run ffmpeg once on the source file to get it to a sort of base standard (say, a higher-quality H.264 MP4 file). This makes sure your other jobs run more quickly, since any issues in the source file will have been cleaned up in this first pass.
Then use that new source/input file to run X number of ffmpeg jobs, for example in bash...
Where you see "..." is where you'd put all your encoding options.
# create 'base' file
ffmpeg -loglevel error -er 4 -i "$INPUT_FILE" ... INPUT.mp4 >> "$LOG_FILE" 2>&1
# the command above will run and then move to start 3 background jobs
# text output will be sent to a log file
echo "base file done!"
# note & at the end to send job to the background
ffmpeg ... -i INPUT.mp4 ... FILENAME1.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME2.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME3.mp4 ... >/dev/null 2>&1 &
# wait until you have no more background jobs running
wait
echo "done!"
Each of the background jobs will run in parallel and will be (essentially) balanced over your CPUs, so you can maximize each core.
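For concreteness, one hypothetical way to fill in the "..." placeholders (codecs and bitrates invented for illustration):
ffmpeg -loglevel error -i INPUT.mp4 -c:v libx264 -b:v 1500k -c:a aac -b:a 128k FILENAME1.mp4 >/dev/null 2>&1 &
ffmpeg -loglevel error -i INPUT.mp4 -c:v libx264 -b:v 800k -c:a aac -b:a 96k FILENAME2.mp4 >/dev/null 2>&1 &
ffmpeg -loglevel error -i INPUT.mp4 -c:v libx264 -b:v 400k -c:a aac -b:a 64k FILENAME3.mp4 >/dev/null 2>&1 &
wait
echo "done!"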
Based on http://sonnati.wordpress.com/2011/08/30/ffmpeg-–-the-swiss-army-knife-of-internet-streaming-–-part-iv/ and http://ffmpeg-users.933282.n4.nabble.com/Multiple-output-files-td2076623.html
ffmpeg -re -i rtmp://server/live/high_FMLE_stream -acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k \
-acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k \
-acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k \
-acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k
Or you could pipe the output to tee and have it feed "X" other processes that do the actual encoding, like
ffmpeg -i input - | tee ...
which might save CPU, since it allows the output encodes to run in parallel (within a single ffmpeg process they apparently run serially).
See http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
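A concrete sketch of that tee approach in bash (output names and bitrates invented): decode once to a raw NUT stream on stdout, then fan it out with process substitution:
ffmpeg -i input.mp4 -f nut -c:v rawvideo -c:a pcm_s16le - | tee \
>(ffmpeg -y -f nut -i - -c:v libx264 -b:v 1500k -c:a aac out_hi.mp4) \
>(ffmpeg -y -f nut -i - -c:v libx264 -b:v 500k -c:a aac out_lo.mp4) \
>/dev/null
The raw intermediate is large, so this trades pipe bandwidth for the CPU saved by decoding only once.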
I have done it like this:
ffmpeg -re -i nameoffile.mp4 -vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 \
-f flv rtmp://rtmp.1.com/code \
-f flv rtmp://rtmp.2.com/code \
-f flv rtmp://rtmp.3.com/code \
-f flv rtmp://rtmp.4.com/code \
-f flv rtmp://rtmp.5.com/code \
but it is not working as well as I expected, compared to restreaming with nginx.
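A likely cause, judging only from the command as written: as noted earlier, each output must be preceded by its own options, so here only rtmp.1.com gets libx264/AAC and the later outputs fall back to the flv muxer defaults. Repeating the options per output should give all five streams the same encoding:
ffmpeg -re -i nameoffile.mp4 \
-vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.1.com/code \
-vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.2.com/code \
-vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.3.com/code \
-vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.4.com/code \
-vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.5.com/code
This encodes the stream five times, though; ffmpeg's tee muxer (shown earlier) encodes once and duplicates the result, which is much lighter on CPU.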
I have a video that I am converting through an automated system; it is 7 MB, and it is basically an MP3 with an image.
Now when it's converted it magically becomes 17 MB. My guess is it's looping through the images instead of compressing them. The video was downloaded from YouTube.
Here is the command I'm converting it with:
/usr/local/bin/ffmpeg -i '/home/site/www-video/Upload/Temp/9d40b683eb2e8e8a036d64c741d04e01.flv' -pass 1 -vcodec libx264 -vpre fast_firstpass -s 480x360 -g 12 -fs 524288000 -vsync 2 -threads 0 -f rawvideo -an -y /dev/null
&&
/usr/local/bin/ffmpeg -i '/home/site/www-video/Upload/Temp/9d40b683eb2e8e8a036d64c741d04e01.flv' -pass 2 -acodec copy -vcodec libx264 -vpre fast -b 512k -g 12 -s 480x360 -fs 524288000 -vsync 2 -threads 0 -y /home/site/www-video/Upload/Temp/15616/video.flv
As you can see, I'm converting it to the same format, and it magically gains 10 MB.
I fixed the problem: ffmpeg was increasing the bitrate, so I wrote some code in PHP to get the video's bitrate and, if it was lower than 512k, set the output bitrate to match it.
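In shell terms (the actual fix was in PHP, so this is just a hypothetical equivalent using ffprobe):
SRC='/home/site/www-video/Upload/Temp/9d40b683eb2e8e8a036d64c741d04e01.flv'
# read the container bitrate of the source
BR=$(ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 "$SRC")
# match the source bitrate if it is below 512k, otherwise cap at 512k
if [ "$BR" -lt 512000 ]; then TARGET="$BR"; else TARGET=512000; fi
# then pass "-b:v $TARGET" to the second-pass command instead of the fixed "-b 512k"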
I use this command line for ffmpeg:
-i Input.flv -vcodec h263 -b 256k -r 15 -s 320x240 -acodec libopencore_amrnb \
-ab 7.4k -ar 8000 -ac 1 -f 3gp Output.3gp
The result is audio-only, without video. But with a frame size of 176x144, it works great.
What's wrong with using a frame size of 320x240? And what is the solution?
Are you sure there is no video in the resulting Output.3gp file? Is it possible that the end device does not support 320x240?
It would help significantly if you were to include the entire FFmpeg output in your question.
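For what it's worth, one likely culprit to check once you have that output: the plain h263 encoder only accepts the standard H.263 picture sizes (128x96, 176x144, 352x288, 704x576, 1408x1152), which would explain why 176x144 works while 320x240 is rejected. Scaling to the nearest supported size is a possible workaround:
ffmpeg -i Input.flv -vcodec h263 -b 256k -r 15 -s 352x288 -acodec libopencore_amrnb \
-ab 7.4k -ar 8000 -ac 1 -f 3gp Output.3gp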