It looks like MJPEG is "just a sequence of compressed JPEG images". If they are served in real time from a server, the frame rate can presumably be defined by the streaming speed. But if it is a .mjpeg file in the filesystem, who defines the frame rate? Or can this format not be represented as a file at all, and only exist as a server-side stream?
It is probably not defined. If such a file must be played locally (for debugging purposes, etc.), it can be converted with ffmpeg, which can assume any transfer rate via the bitrate switch (-b).
ffmpeg -i source_file.mjpeg -b 4000k -vcodec libx264 destination_file.mp4
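If you also want to control the frame rate that ffmpeg assumes for the raw MJPEG input, -r can be given as an input option (a minimal sketch, assuming a 25 fps target; the filenames are placeholders):
ffmpeg -r 25 -i source_file.mjpeg -b:v 4000k -vcodec libx264 destination_file.mp4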
The produced file can then be played back the way it would appear if it were coming from the server:
cvlc destination_file.mp4
I can download http://www.w6rz.net/adv8dvbt23.ts, and there are many sample DVB-T .ts files around.
But I want to convert my own video file to a TS file for DVB-T.
I checked on Google first, but I could not find any answer. Maybe this does not make sense, or my way of thinking about it is wrong.
Can FFmpeg be used for this? There does not seem to be any parameter for transmit mode, constellation (QAM, e.g. 64QAM), or guard interval.
As I explained already:
ffmpeg doesn't know anything about RF things like constellation type; it is just a tool to transcode between different video formats. .ts is for "transport stream", and it's the video container format that DVB uses. The GNU Radio transmit flowgraphs, on the other hand, know nothing about video things: all they do is take the bits from a file. So that file needs to be in a format that the receiver will understand, and that's why I instructed you to use ffmpeg with the parameters you need. Since I don't know which bitrate you're planning on transmitting, I can't help you with the exact ffmpeg invocation.
So you need to generate video data that your DVB-T receiver understands, but even more importantly, you need to put it in a container that ensures a constant bitrate.
As pointed out in a different comment to your ham.stackexchange.com question about the topic, your prime source of examples would be GNU Radio's own gr-dtv module; when you look into gnuradio/gr-dtv/examples/README.dvbt, you'll find a link to https://github.com/drmpeg/dtv-utils , W6RZ's own tooling :)
There you'll find the tools necessary to calculate the exact bitrate your MPEG transport stream needs to have. Remember, a DVB-T transmitter has to transmit a constant number of bits per second, so your video container must be constant-bitrate. That's why a transport stream pads the video data to achieve a constant rate.
Then you'll use ffmpeg to transcode your video and put it into the transport stream container:
ffmpeg -re -i inputvideo.mpeg \
  -vcodec mpeg2video \
  -s 720x576 \
  -r 25 \
  -flags cgop+ilme -sc_threshold 1000000000 \
  -b:v 2M \
  -minrate:v 2M -maxrate:v 2M \
  -acodec mp2 -ac 2 -b:a 192k \
  -muxrate ${RATE_FROM_TOOL} \
  -f mpegts \
  outputfile.ts
What the options mean:
-s 720x576: resolution; this is a good choice, since most TVs will deal with it
-r 25: frames per second; use 25
-flags cgop+ilme -sc_threshold 1000000000: MPEG codec options
-b:v 2M: video codec data bitrate (defines video quality); it must be lower than the stream bitrate, i.e. less than muxrate minus the audio bitrate
-minrate:v 2M -maxrate:v 2M: enforce a constant video bitrate
-acodec mp2 -ac 2 -b:a 192k: audio codec, channels and bitrate
-muxrate ${RATE_FROM_TOOL}: the constant transport stream rate calculated with the tool above
-f mpegts: specify that you want an MPEG transport stream container as output
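As a sanity check (this step is not part of the original answer), you can verify the overall bitrate of the generated file with ffprobe before feeding it to the transmit flowgraph:
ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 outputfile.ts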
I am starting with a high-res video file and I would like to create 3 variants: low quality, mid quality, and high quality for mobile streaming. I want these low/mid/high variants to be segmented into ts pieces that the m3u8 file points to. Is there a way to do this in one line in ffmpeg?
I have successfully generated an m3u8 file and ts segments with ffmpeg; do I need to do this 3x and set specs for low/mid/high? If so, how do I get a single m3u8 file to point to all variants, as opposed to one for each variant?
This is the command I used to generate the m3u8 file along with the ts segments.
ffmpeg -i C:\Users\george\Desktop\video\hos.mp4 -strict -2 -acodec aac -vcodec libx264 -crf 25 C:\Users\user\Desktop\video\hos_Phone.m3u8
Yes, you need to encode all variants and generate the media playlists first (the playlists containing the segments).
If you want, you can do it in one command, since ffmpeg supports multiple inputs/outputs. E.g.:
ffmpeg -i input \
... [encoding parameters 1] ... output1 \
... [encoding parameters 2] ... output2 \
... [encoding parameters 3] ... output3
You must provide the variants in multiple qualities/bitrates but the aspect ratio should remain the same. Keeping the aspect ratio was initially mandatory but in the latest HLS authoring guide it's downgraded to a recommendation.
All variant streams must be keyframe aligned, so set a GOP size using the -g option, disable scene-cut detection, and use a segment duration (hls_time) that is a multiple of your keyframe interval.
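For illustration, a minimal sketch of such a one-command encode (not from the original answer; it assumes a 25 fps source, so -g 50 gives a 2-second keyframe interval and -hls_time 4 is a multiple of it, and the resolutions, CRF values, and audio bitrates are placeholders to tune):
ffmpeg -i input.mp4 \
  -vf scale=-2:360 -c:v libx264 -crf 28 -g 50 -sc_threshold 0 \
  -c:a aac -b:a 96k -f hls -hls_time 4 -hls_playlist_type vod low.m3u8 \
  -vf scale=-2:540 -c:v libx264 -crf 23 -g 50 -sc_threshold 0 \
  -c:a aac -b:a 128k -f hls -hls_time 4 -hls_playlist_type vod medium.m3u8 \
  -vf scale=-2:720 -c:v libx264 -crf 21 -g 50 -sc_threshold 0 \
  -c:a aac -b:a 128k -f hls -hls_time 4 -hls_playlist_type vod high.m3u8
Each output then gets its own media playlist and segments, all cut at the same points thanks to the shared GOP size and hls_time.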
Once you have all three m3u8 media playlists, you can manually create the master playlist, which points to each media playlist.
Example from the Apple HLS documentation; you must change the bandwidth, codecs, resolution and playlist filenames according to your own encoding options:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2855600,CODECS="avc1.4d001f,mp4a.40.2",RESOLUTION=960x540
medium.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5605600,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=1280x720
high.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,CODECS="avc1.42001f,mp4a.40.2",RESOLUTION=640x360
low.m3u8
The aspect ratio does not have to be the same; that makes no sense. How could you know what the client can play? Aspect ratios are 4:3 for non-HD and 16:9 for HD variants.
You don't want to do all your variants in one ffmpeg command if you need segment times to be consistent.
Also watch transcoding downward: if you go from 1080 to 360, there might be issues. One that I often get is that the audio degrades and sounds weird. I try to go down no more than half if I want high quality.
@DavidC That hex identifies the codec profile and level.
I need to convert all videos for my video player (on a website) when the file type is something other than flv/mp4/webm.
When I use ffmpeg -i filename.mkv -sameq -ar 22050 filename.mp4, I get:
[h264 @ 0x645ee0] error while decoding MB 22 1, bytestream (8786)
My point is: what should I do when I need to convert files of type .mkv and others (not supported by JW Player) to flv/mp4 without quality loss?
Instead of -sameq (removed from FFmpeg), use -qscale 0: the file size will increase, but it will preserve the quality.
Do not use -sameq; it does not mean "same quality"
This option was removed from FFmpeg a while ago, which means you are using an outdated build.
Use the -crf option instead when encoding with libx264. This is the H.264 video encoder used by ffmpeg and, if available, is the default encoder for MP4 output. See the FFmpeg H.264 Video Encoding Guide for more info on that.
Get a recent ffmpeg
Go to the FFmpeg Download page and get a build there. There are options for Linux, OS X, and Windows. Or you can follow one of the FFmpeg Compile Guides. Because FFmpeg development is so active it is always recommended that you use the newest version that is practical for you to use.
You're going to have to accept some quality loss
You can produce a lossless output with libx264, but that will likely create absolutely huge files and may not be decodable by the browser and/or supported by JW Player (I've never tried).
The good news is that you can create a video that is roughly visually lossless. Again, the files may be somewhat large, but you need to make a choice between quality and file size.
With -crf, choose a value between 18 and around 29. Pick the highest number that still gives acceptable quality, and use that value for your videos.
Other things
Add -movflags +faststart. This will relocate the moov atom from the end of the file to the beginning. This will allow the video to begin playback while it is still being downloaded. Otherwise the whole video must be completely downloaded before it can begin playing.
Add -pix_fmt yuv420p. This will ensure a chroma subsampling that is compatible for all players. Otherwise, ffmpeg, by default and depending on several factors, will attempt to minimize or avoid chroma subsampling and the result is often not playable by non-FFmpeg based players.
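Putting these pieces together, a hedged example of a complete command (the CRF value, audio bitrate, and filenames are only illustrative, and it assumes a recent ffmpeg whose native AAC encoder no longer requires -strict -2):
ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium -pix_fmt yuv420p -movflags +faststart -c:a aac -b:a 128k output.mp4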
Convert all mkv to mp4 without quality loss (actually it is only re-packaging):
for %a in ("*.mkv") do ffmpeg.exe -i "%a" -vcodec copy -acodec copy -scodec mov_text "%~na.mp4"
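The command above is for the Windows cmd shell. On a POSIX shell (Linux/macOS), a hedged equivalent would be:
for f in *.mkv; do ffmpeg -i "$f" -c:v copy -c:a copy -c:s mov_text "${f%.mkv}.mp4"; done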
For me, this was the best way to convert:
ffmpeg -i {input} -vcodec copy {output}
I am writing a script in Python that appends multiple .webm files to one .mp4. Converting one 5-second chunk used to take 10 to 20 seconds using:
ffmpeg -i {input} -qscale 0 {output}
and some folders have more than 500 chunks. With stream copying it now takes less than a second per chunk; converting a 1:20:00 video took 5 minutes.
For MP3, the best option is -q:a 0 (the same as -qscale:a 0), but MP3 is always lossy.
For lossless audio, use FLAC instead.
See this documentation link.
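For illustration, hedged examples of both (the filenames are placeholders, and -q:a 0 assumes ffmpeg is using the libmp3lame encoder for MP3 output):
ffmpeg -i input.wav -q:a 0 output.mp3
ffmpeg -i input.wav -c:a flac output.flac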
I'm on Windows 7 and I have many .mp4 videos that I want to convert to .flv. I have tried ffmpeg and Free FLV Converter, but each time the results are not what I'm looking for.
I want a video of the same quality (or almost the same, as long as it looks good) with a smaller file size, because every attempt so far has produced a video that looks bad while the file size actually increased.
How can I get a good-looking video that is smaller and in .flv?
Thanks a lot!
First, see slhck's blog post on Super User for a good FFmpeg tutorial. FLV is a container format and can hold several different video formats such as H.264 and audio formats such as AAC and MP3. The MP4 container can also hold H.264 and AAC, so if your input uses these formats you can simply "copy and paste" the video and audio from the mp4 into the flv. This preserves the quality because there is no re-encoding.
The following two examples do the same thing (copy the video and audio from the mp4 to the flv), but the ffmpeg syntax varies depending on your ffmpeg version. If one doesn't work, try the other:
ffmpeg -i input.mp4 -c copy output.flv
ffmpeg -i input.mp4 -vcodec copy -acodec copy output.flv
However, you did not supply any information about your input, so these examples may not work for you. To reduce the file size you will need to re-encode. The link I provided shows how to do that. Pay special attention to the Constant Rate Factor section.
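If you do end up re-encoding, here is a minimal sketch using libx264 and AAC inside the FLV container (the CRF value and audio bitrate are just starting points, not recommendations from the original answer):
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -c:a aac -b:a 128k output.flv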
I use ffmpeg to encode a video file to an MPEG transport stream (.ts), which is subsequently sent over the network. If there is any network bandwidth fluctuation, I want to dynamically change the stream's bitrate.
My current solution involves restarting ffmpeg with a different bitrate, as below:
ffmpeg -i input.avi -ss <resume point> -b:v <new bitrate> output.ts
Unfortunately, for certain input file formats, glitches get introduced into the video stream with this approach. So I am looking for a solution where ffmpeg's output bitrate can be changed dynamically, possibly using signals.