I want to loop a video until the audio stops. Everything works fine, but it takes too much time: if my audio file is 4 minutes long, processing takes approximately 4 minutes, and the output file is also far too large. Here is my command:
String[] cmd = new String[]{"-i",audioFile.getAbsolutePath(),"-filter_complex","movie="+videoFile.getAbsolutePath()+":loop=0,setpts=N/(FRAME_RATE*TB)","-c","copy","-y",createdFile.getAbsolutePath()};
We see many "encoding with ffmpeg on Android is too slow" questions here. Assuming you're encoding with libx264, add -preset ultrafast and -crf 26, or whatever value looks acceptable to you (see FFmpeg Wiki: H.264).
Not much else you can do if you want to use software-based encoding via ffmpeg & x264. FFmpeg does not yet support MediaCodec hardware encoding as far as I know. It does support MediaCodec video decoding of H.264, HEVC, MPEG-2, MPEG-4, VP8, and VP9, but decoding is not the bottleneck here.
You can try to get x264 to use your CPU's capabilities, for example by avoiding a build of x264 compiled with --disable-asm, but I don't know what is possible on your hardware.
Note that stream copying (re-muxing) with -c copy is not possible when filtering the same stream, so change it to the more specific -c:a copy since you are not filtering the audio.
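Putting those suggestions together as a plain command line, a minimal sketch (the file names and the CRF value are placeholders to adapt, not taken from the question):

ffmpeg -i audio.mp3 -filter_complex "movie=video.mp4:loop=0,setpts=N/(FRAME_RATE*TB)" -preset ultrafast -crf 26 -c:a copy -y output.mp4

The video stream is re-encoded (it has to be, since it passes through the filter), while the audio stream is stream-copied untouched.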
Try this command with "-preset", "ultrafast" added, and with -c copy changed to -c:a copy as suggested above:
String[] cmd = new String[]{"-i",audioFile.getAbsolutePath(),"-preset", "ultrafast","-filter_complex","movie="+videoFile.getAbsolutePath()+":loop=0,setpts=N/(FRAME_RATE*TB)","-c:a","copy","-y",createdFile.getAbsolutePath()};
I can download http://www.w6rz.net/adv8dvbt23.ts, and there are many sample DVB-T .ts files available.
But I want to convert my own video file to a TS file for DVB-T.
I checked on Google first, but I could not find any answer; maybe this does not make sense, or my way of thinking is wrong.
Can FFmpeg be used for this? There seems to be no parameter for transmit mode, constellation (QAM / 64-QAM), or guard interval.
As I explained already:
ffmpeg doesn't know anything about RF things like constellation type; it is just a tool to transcode between different video formats. .ts is for "transport stream", and it's the video container format that DVB uses. The GNU Radio transmit flowgraphs, on the other hand, know nothing about video things – all they do is take the bits from a file. So that file needs to be in a format that the receiver would understand, and that's why I instructed you to use FFmpeg with the parameters you need. Since I don't know which bitrate you're planning on transmitting, I can't help you with how to use ffmpeg.
So you need to generate video data that your DVB-T receiver understands, and, even more importantly, you need to put it in a container that ensures a constant bitrate.
As pointed out in a different comment to your ham.stackexchange.com question about the topic, your prime source of examples would be GNU Radio's own gr-dtv module; when you look into gnuradio/gr-dtv/examples/README.dvbt, you'll find a link to https://github.com/drmpeg/dtv-utils, W6RZ's own tooling :)
There you'll find the tools necessary to calculate the exact stream bitrate you need your MPEG transport stream to have. Remember, a DVB-T transmitter has to transmit at a constant bits per second, so your video container must be constant-bitrate. That's why a transport stream pads the video data to achieve constant rate.
Then, you'll use ffmpeg to transcode your video and put into the transport stream container:
ffmpeg -re -i inputvideo.mpeg \
  -vcodec mpeg2video \
  -s 720x576 \
  -r 25 \
  -flags cgop+ilme -sc_threshold 1000000000 \
  -b:v 2M -minrate:v 2M -maxrate:v 2M \
  -acodec mp2 -ac 2 -b:a 192k \
  -muxrate ${RATE FROM TOOL} \
  -f mpegts \
  outputfile.ts

Option by option:
-s 720x576: resolution; this is a good choice, since most TVs will deal with it
-r 25: frames per second; use 25
-flags cgop+ilme -sc_threshold 1000000000: MPEG codec options
-b:v 2M: video codec data bitrate (defines video quality); must be lower than the stream bitrate, so < muxrate minus the audio bitrate
-minrate:v 2M -maxrate:v 2M: enforce a constant video bitrate
-acodec mp2 -ac 2 -b:a 192k: audio codec, channel count, and bitrate
-muxrate: the constant transport-stream rate calculated by the tool
-f mpegts: specify that you want an MPEG transport stream container as output
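To sanity-check the result before feeding it to GNU Radio, you can simply play the transport stream back locally (assuming ffplay is installed alongside ffmpeg):

ffplay outputfile.ts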
I'm trying to capture and stream video from a 5MP USB camera using ffmpeg 3.2.2 on Windows. Here's the command line that I'm using:
ffmpeg -f dshow -video_size 320x240 -framerate 30 -i video="HD USB Camera" -vcodec libx264 -preset ultrafast -tune zerolatency -g 60 -f mpegts udp://192.168.1.100:10000
The destination for my stream (an Ubuntu box on the same subnet) is running ffplay via:
ffplay -i udp://127.0.0.1:10000
This works but the video stream seems like it's delayed by 8-10 seconds. It's my understanding that the destination can't begin displaying the stream until it receives an I-frame, so I tried specifying a GOP value of 60, thinking that this would cause an I-frame to be inserted every 2 seconds (@ 30 FPS).
The Windows machine that's doing the transcoding is running an i7-3840QM @ 2.80GHz and has 32 GB RAM. FFmpeg appears to be using very little CPU (like 2%), so it doesn't seem to be CPU bound. Just as a test, I tried ingesting an MP4 file and not doing any transcoding (ffmpeg -re -i localFile.mp4 -c copy -f mpegts udp://192.168.1.100:10000), but it still takes several seconds before the stream is displayed on the Ubuntu system.
On a related note, I'm also evaluating a trial version of the Wowza Streaming Engine server, and when I direct my ffmpeg stream to Wowza, I get the same 8-10 second delay before the Wowza test player starts playing it back. For what it's worth, once the stream starts playing, it seems to run fine (other than the fact that everything is "behind" by several seconds).
I'm new to video streaming so I might be missing something obvious here but can anyone tell me what might be causing this delay or suggest how I might further troubleshoot the problem? Thank you!
Try setting these values:
analyzeduration integer (input)
Specify how many microseconds are analyzed to probe the input. A higher value will enable detecting more accurate information, but will increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
probesize integer (input)
Set probing size in bytes, i.e. the size of the data to analyze to get stream information. A higher value will enable detecting more information in case it is dispersed into the stream, but will increase latency. Must be an integer not lesser than 32. It is 5000000 by default.
FFmpeg docs
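For example, on the receiving side you could drop both values close to their minimums; a sketch (these exact numbers are assumptions to tune, not canonical values):

ffplay -probesize 32 -analyzeduration 500000 -i udp://127.0.0.1:10000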
I'm looking to use Janus Gateway to stream very low latency to a thousand viewers from a single source.
I'm aiming for VP8 video streaming, since H.264 support hasn't landed in Chrome yet.
My config is
[gst-rpwc]
type = rtp
id = 1
description = Test Stream
audio = no
video = yes
videoport = 8004
videopt = 100
videortpmap = VP8/90000
I'm testing initially on OS X with the built-in webcam. This is the pipeline:
ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" -b:v 800k -c:v libvpx rtp://x.x.x.x:8004
But the CPU on my Retina MacBook Pro sits at 100% the entire time, and I'm only getting a few frames every few seconds on the client end. I believe the conversion from the built-in iSight camera to VP8 is too intensive. Is there a way to make this conversion more efficient?
I'm no expert on Janus, but for a WebRTC VP8 stream, the videofmtp you have doesn't make sense, as that string is for H.264; and, to a lesser extent, the videopt isn't what I've seen for VP8: that value should be 100. The biggest issue here is that ffmpeg can't do DTLS, so even with the modifications I've specified, this will probably not work.
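As for the encoding load itself, libvpx has realtime settings that trade quality for speed. A minimal sketch (the -deadline and -cpu-used values are assumptions to experiment with, not taken from the question):

ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" -c:v libvpx -deadline realtime -cpu-used 8 -b:v 800k rtp://x.x.x.x:8004

Higher -cpu-used values make the encoder faster at the cost of quality.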
I need to convert all videos for my video player (on a website) when the file type is something other than flv/mp4/webm.
When I use ffmpeg -i filename.mkv -sameq -ar 22050 filename.mp4, I get:
[h264 @ 0x645ee0] error while decoding MB 22 1, bytestream (8786)
My point is: what should I do when I need to convert .mkv and other file types (not supported by JW Player) to flv/mp4 without quality loss?
Instead of -sameq (removed from FFmpeg), use -qscale 0: the file size will increase, but it will preserve the quality.
Do not use -sameq, it does not mean "same quality"
This option was removed from FFmpeg a while ago, which means you are using an outdated build.
Use the -crf option instead when encoding with libx264. This is the H.264 video encoder used by ffmpeg and, if available, is the default encoder for MP4 output. See the FFmpeg H.264 Video Encoding Guide for more info.
Get a recent ffmpeg
Go to the FFmpeg Download page and get a build there. There are options for Linux, OS X, and Windows. Or you can follow one of the FFmpeg Compile Guides. Because FFmpeg development is so active it is always recommended that you use the newest version that is practical for you to use.
You're going to have to accept some quality loss
You can produce a lossless output with libx264, but that will likely create absolutely huge files and may not be decodeable by the browser and/or be supported by JW Player (I've never tried).
The good news is that you can create a video that is roughly visually lossless. Again, the files may be somewhat large, but you need to make a choice between quality and file size.
With -crf, choose a value between 18 and around 29. Choose the highest number that still gives acceptable quality, and use that value for your videos.
Other things
Add -movflags +faststart. This will relocate the moov atom from the end of the file to the beginning. This will allow the video to begin playback while it is still being downloaded. Otherwise the whole video must be completely downloaded before it can begin playing.
Add -pix_fmt yuv420p. This will ensure a chroma subsampling that is compatible for all players. Otherwise, ffmpeg, by default and depending on several factors, will attempt to minimize or avoid chroma subsampling and the result is often not playable by non-FFmpeg based players.
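Putting the pieces from this answer together, a minimal sketch (the CRF value and the AAC audio settings are example choices to adjust, not requirements):

ffmpeg -i input.mkv -c:v libx264 -crf 23 -pix_fmt yuv420p -movflags +faststart -c:a aac -b:a 128k output.mp4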
Convert all .mkv files to .mp4 without quality loss (strictly speaking, it is only re-packaging):
for %a in ("*.mkv") do ffmpeg.exe -i "%a" -vcodec copy -acodec copy -scodec mov_text "%~na.mp4"
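That loop is Windows cmd syntax. On Linux or macOS, a rough bash equivalent might look like this (a sketch under the same assumptions: streams that MP4 can hold, text subtitles converted to mov_text):

for f in *.mkv; do ffmpeg -i "$f" -c:v copy -c:a copy -c:s mov_text "${f%.mkv}.mp4"; done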
For me, this was the best way to convert it:
ffmpeg -i {input} -vcodec copy {output}
I am writing a script in Python that appends multiple .webm files to one .mp4. It was taking me 10 to 20 seconds to convert one chunk of 5 seconds using:
ffmpeg -i {input} -qscale 0 {output}
There are some folders with more than 500 chunks.
Now it takes less than a second per chunk. It took me 5 minutes to convert a video that is 1:20:00 long.
For MP3, the best is to use -q:a 0 (the same as -qscale 0), but MP3 is always lossy.
For less quality loss, use FLAC.
See this documentation link
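For instance, a minimal sketch with placeholder file names (not from the original answer):

ffmpeg -i input.wav -q:a 0 output.mp3
ffmpeg -i input.wav -c:a flac output.flac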
I'm on Windows 7 and I have many .mp4 videos that I want to convert to .flv. I have tried ffmpeg and Free FLV Converter, but each time the results are not what I'm looking for.
I want a video of the same quality (or almost the same, still looking good) and a smaller file size, because every time I have tried so far, the resulting video is pretty bad and the file size actually increases.
How can I get a good-looking video, smaller in size, in .flv?
Thanks a lot!
First, see slhck's blog post on superuser for a good FFmpeg tutorial. FLV is a container format and can support several different video formats such as H.264 and audio formats such as AAC and MP3. The MP4 container can also support H.264 and AAC, so if your input uses these formats then you can simply "copy and paste" the video and audio from the mp4 to the flv. This will preserve the quality because there is no re-encoding. These two examples do the same thing, which is copying video and audio from the mp4 to the flv, but the ffmpeg syntax varies depending on your ffmpeg version. If one doesn't work then try the other:
ffmpeg -i input.mp4 -c copy output.flv
ffmpeg -i input.mp4 -vcodec copy -acodec copy output.flv
However, you did not supply any information about your input, so these examples may not work for you. To reduce the file size you will need to re-encode. The link I provided shows how to do that. Pay special attention to the Constant Rate Factor section.
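If you do need to re-encode to shrink the file, a minimal sketch (the CRF value and audio bitrate are example choices; raise the CRF for a smaller file):

ffmpeg -i input.mp4 -c:v libx264 -crf 26 -c:a aac -b:a 128k output.flv

FLV supports H.264 video and AAC audio, so the result stays compatible with Flash-based players.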