Recently I built ffmpeg (version 4.1.1, 32-bit) on Windows, trying to push an SRT stream to the Internet. When I use the dshow input, I can use both -list_devices and -list_options to find my camera correctly. But once I use ffplay to play my camera, it doesn't respond.
On the other hand, I can play my camera successfully through vfwcap with "ffplay -f vfwcap -i 0", but this is not what I want, because I also want to capture my microphone device at the same time.
This problem has bothered me for a week, and I can't find the right solution on the Internet. I've tried adding the parameters my camera may need, but the same problem appears.
If you have time, can you help me, or at least point me to where the problem may arise? I will be very grateful for your help and look forward to your reply.
Actually, my final goal is to use the dshow input to display my camera and push the stream over SRT at the same time, so that I can pull the SRT stream on another computer, compare the two screens, and see the delay intuitively and visually. I've built SRT (32-bit) and ffmpeg (32-bit, v4.1.1) successfully, but my dshow command doesn't seem to work well.
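For reference, a rough sketch of what that setup could look like, assuming ffmpeg was linked against libsrt; the device names and the SRT address are placeholders (take the exact names from -list_devices):
ffplay -f dshow -i video="USB Camera"
ffmpeg -f dshow -i video="USB Camera":audio="Microphone (USB Audio)" -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -f mpegts "srt://203.0.113.5:9000?mode=caller"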
Related
I discovered some damaged AVI files; VLC complains about a broken index when I try to play them. I can either play them directly without the ability to scroll the timeline, or wait... and wait... for the index to be rebuilt (but not saved) and then play normally. Some other players play them without complaining; others refuse to play them at all.
I can solve the problem seamlessly in VirtualDub by opening the .avi with "extended options", selecting "re-derive keyframe flags", and then saving a new .avi file with direct stream copy for video and audio. The resulting file plays perfectly.
I can also solve the problem with ffmpeg, but not without issues.
ffmpeg -i INFILE -vcodec copy -acodec copy OUTFILE
Important: only stream copy and same container are of interest.
The resulting file plays in VLC without complaints, but then comes the next problem: in many other players, when jumping on the timeline the video gets distorted immediately at the jump destination and stays heavily distorted until the next I-frame in the stream. None of this happens when the file was processed with VirtualDub.
ffmpeg is faster, but most importantly it is scriptable, so the process could be automated for many files. With VirtualDub one has to process each file manually and wait a very long time for the open step to re-derive the keyframe flags first. I wouldn't mind losing ffmpeg's speed in exchange for the automation it can provide.
So far I have only found a very old, unanswered mailing-list post here.
Can ffmpeg fix such files without the aforementioned problem? If yes, how?
Thank you.
AVI file indexes contain all frames (key or not), but they have a flags field (which FFmpeg fills in) that should help players seek only to keyframes. I don't have access to your exact file (ffprobe information would be helpful), but we can assume the flags field is not written correctly, e.g. it might be set for every frame or for none at all.
VLC likely parses the codec packets to derive the keyframe flag if it is absent in the container, but other players might not. I think what you're looking for is to derive keyframe flags while stream-copying. The exact command line depends a bit on the codec. For example, for H264 you'd want to dump to Annex B as an intermediate file format, then re-read that so the H264 parser is invoked, which sets the keyframe flag, and then re-mux that into AVI - but H264 in AVI is rare, so that's probably not what's happening here.
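Purely as an illustration of that H264 path (the filenames are placeholders, and this assumes the file really does contain H264), the roundtrip could look roughly like this:
ffmpeg -i broken.avi -map 0:v -c copy -f h264 video.h264
# add -bsf:v h264_mp4toannexb before the output if ffmpeg complains about the bitstream format
ffmpeg -i video.h264 -i broken.avi -map 0:v -map 1:a -c copy fixed.avi
# drop "-map 1:a" if the AVI has no audio track, and set "-framerate <fps>" before "-i video.h264" if the output frame rate is wrong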
So for a solution, I will need the output of ffprobe $file so I know what codec the AVI file contains.
I recently asked about how I could download segments of an online m3u8 file, and someone pointed out that this could be accomplished via ffmpeg:
ffmpeg -i [LINK] -codec copy [OUTPUT FILE] #downloads only audio segments;
ffmpeg -i [LINK] -bsf:a aac_adtstoasc -vcodec copy -c copy -crf 50 [OUTPUT] #downloads audio and video segments
For those who aren't familiar, m3u8 is formatted kind of like a "playlist": the m3u8 file points to a bunch of smaller "segments" which are pieced together to form the whole of the video. As a result, it's completely possible to halt the above commands partway through their execution and still produce a watchable video (i.e. one that will be interpreted correctly by video editors).
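For illustration only, a media playlist typically looks something like this (the segment names are made up):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
seg000.ts
#EXTINF:10.0,
seg001.ts
#EXTINF:10.0,
seg002.ts
#EXT-X-ENDLIST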
I'm wondering if there's a built-in method with ffmpeg that allows me to grab segments N-M of a given m3u8. If there are methods outside of ffmpeg, feel free to mention them as well. Thanks for the help.
After having looked into it, I can say that this isn't possible via ffmpeg alone. You could theoretically use the -ss and -t parameters to specify a starting point and duration, but ffmpeg appears to look at every segment up until the specified endpoint, making the download process prohibitively long.
If you want to download only a specific range of segments, you need to look at the m3u8 file, find its associated media playlist, and download the segments from that media playlist yourself.
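A rough sketch of that manual approach, assuming the segment URIs in the media playlist are absolute URLs; the playlist URL and the chosen range (segments 10-20 here) are placeholders:
# grab the media playlist, drop the tag lines, keep entries 10-20
curl -s https://example.com/stream/media.m3u8 | grep -v '^#' | sed -n '10,20p' > segments.txt
# download each segment
while read -r url; do curl -sO "$url"; done < segments.txt
# list the downloaded pieces for ffmpeg's concat demuxer (assumes the filenames sort in playback order)
for f in *.ts; do printf "file '%s'\n" "$f"; done > list.txt
# stitch them back together with a stream copy
ffmpeg -f concat -safe 0 -i list.txt -c copy clip.ts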
Could anyone help me? I have been trying to record video from an RTSP server using FFMPEG, but somehow the resulting video has many frozen images (it couldn't be used for any people detection) - the people look similar to this:
Here is the code I used:
ffmpeg -i rtsp://10.10.10.10/encoder1 -b:v 1024k -s 640x480 -an -t 60 -r 12.5 output.mp4
What have I done so far?
- Recorded the video at a smaller size instead of the original one
- Disabled audio and lowered the FPS
- Even recorded from only two IP sources on one machine
But I still haven't had any luck. Has anyone else experienced this?
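In case it helps narrow things down, one variant of the same command worth trying - assuming the distortion comes from packet loss on the default UDP transport, which the post doesn't confirm - forces the RTSP session over TCP:
ffmpeg -rtsp_transport tcp -i rtsp://10.10.10.10/encoder1 -b:v 1024k -s 640x480 -an -t 60 -r 12.5 output.mp4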
How can I use FFMPEG to add a delay to a stream being sent from a (v4l2) webcam to a media server?
The use case here is something like a security camera where I want to be able to stream video to a server when something is detected in the video. The easiest way to ensure the event of interest is captured in the video is to use FFMPEG to stream from the camera to a virtual loopback device with an added delay. That loopback device can then be used to initiate live streaming when an event of interest occurs.
In GStreamer, I would accomplish a delay of this sort with the queue element's min-threshold-time parameter. For example the following (much-simplified) example pipeline adds a 2 second delay to the output coming from a v4l2 webcam before displaying it:
gst-launch-1.0 v4l2src device=/dev/video1 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=2000000000 ! xvimagesink
How do I accomplish the same thing with FFMPEG? There are some technical challenges that prevent us from using GStreamer for this.
I have investigated the itsoffset option for this, but as far as I can tell it is only usable for already-recorded files, and it is not clear what a good alternative would be.
With a recent git build of ffmpeg, the basic template is
ffmpeg -i input -vf tpad=start_duration=5 -af "adelay=5000|5000" stream-out
The tpad filter will add 5 seconds of black at the start of the video stream, and the adelay filter will add 5000 milliseconds of silence to the first two channels of the audio.
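Applied to the v4l2 webcam case in the question, a minimal video-only sketch might look like this (the device paths are assumptions, with /dev/video10 standing in for a v4l2loopback device):
ffmpeg -f v4l2 -i /dev/video1 -vf tpad=start_duration=2 -pix_fmt yuv420p -f v4l2 /dev/video10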
I'm attempting to copy videos from a site. They are stored in 6 different resolutions, as an HLS stream. When I use the command
ffmpeg -i http://c.brightcove.com/services/mobile/streaming/index/master.m3u8?videoId=5506754630001 -c copy output.ts
I get the highest quality (1280x720). However, when I wget the .m3u8 I can see there are other qualities, but I'm having trouble figuring out how to copy one of those instead (e.g. 640x380). The original link is http://www.sportsnet.ca/hockey/nhl/analyzing-five-potential-trade-destinations-matt-duchene/.
I'm hoping someone can help me out with this. Thank you.
I don't know if it's of any help, but
ffmpeg -i http(s)://link/to/input.m3u8 -map m:variant_bitrate:BITRATE -c copy output.ts
is a valid approach for selecting the quality.
The variant_bitrate meta tag is documented here: FFmpeg Formats Documentation#applehttp.
The stream specifiers that can be used via -map option are documented here:
ffmpeg Documentation#5.1 Stream specifiers
This means you need to know the BITRATE of the master, which can be a bit more complicated...
If it's still of interest, I can get back with a python 3.6 script that would require an external module...
or you have to manually check which bitrate you need
(in a browser, or with ffprobe -i http(s)://link/to/input.m3u8).
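Put together, the two-step workflow might look like this (the URL and the bitrate value are placeholders; use whatever ffprobe reports as variant_bitrate for the rendition you want):
ffprobe -i https://example.com/master.m3u8
ffmpeg -i https://example.com/master.m3u8 -map m:variant_bitrate:1100000 -c copy output.ts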
If anyone knows more about this, it would be nice to know when this variant_bitrate meta tag was implemented, as I'm quite sure that this wasn't always possible...