FFMPEG Command to format videos to TikTok's specs?

I'm trying to upload a video exported by Windows Video Editor to TikTok. It's a .mp4 file, and while it does upload, it isn't "TikTok'd", meaning it only takes up the middle of the screen. I was wondering what the ffmpeg command would be to output a video to TikTok's specs.
Here's how it currently looks.
And here's how I want it to look.

Use the crop filter to convert horizontal to vertical video:
ffmpeg -i input.mp4 -vf "crop=ih*(9/16):ih" -crf 21 -c:a copy output.mp4
This will make it 9:16 aspect ratio.
-crf controls quality. See FFmpeg Wiki: H.264. A value in the 18-23 range should be good enough for a TikTok video.
Audio is stream copied (-c:a copy). If you get an error because your audio isn't compatible with MP4 then remove -c:a copy and AAC will be automatically used instead.
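If you also want to match the 1080x1920 frame size TikTok typically uses (an assumption on my part, not something the answer above specifies), you can chain a scale filter after the crop:
ffmpeg -i input.mp4 -vf "crop=ih*(9/16):ih,scale=1080:1920" -crf 21 -c:a copy output.mp4
The crop runs first, so the scale only resizes an already 9:16 frame and introduces no distortion.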

Related

ffmpeg: filter_complex issue: 25fps vs. 60fps (with transparent VP9 webm)

I have a lot of generated PNG images with transparency / alpha channel.
These need to be converted to a webm file with the VP9 codec to have a video with transparent areas.
The output framerate should be 60fps, but filter_complex (which I need in order to select the yuva420p format) apparently has an issue with that.
I found an answer describing how to fix this and tried to adapt it to my case, but it did not work.
The 1055 source png files should create a video with 60fps and a duration of 17.6 sec. But the result is a file with 42.2 sec (1055 frames at 25fps); according to ffmpeg 2530 frames were used.
So the video file itself has 60fps but the source frames are "stretched" to 25fps. Kind of a slow-mo video.
What do I need to change?
Or would it be possible to put the PNGs into a VP9 webm at 60fps without using filter_complex at all?
The command I'm using, adapted from another answer I found, is:
ffmpeg -hide_banner -i srcImgMasked_%05d.png -filter_complex "nullsrc=size=1920x1080:rate=60 [base]; [0]format=yuva420p [bla]; [base] [bla] overlay=shortest=0" -c:v libvpx-vp9 -crf 20 -b:v 0 -threads 0 result.webm -y
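One possible fix (a sketch, not an answer from the original thread): the image2 demuxer reads the PNG sequence at 25 fps by default, which is where the slow-motion effect comes from. Setting the input framerate and letting the encoder take yuva420p directly avoids filter_complex entirely:
ffmpeg -hide_banner -framerate 60 -i srcImgMasked_%05d.png -c:v libvpx-vp9 -pix_fmt yuva420p -crf 20 -b:v 0 result.webm -y
-framerate 60 must come before -i so it applies to the image input; libvpx-vp9 can encode the alpha channel when given yuva420p.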

Using FFMPEG to losslessly convert YUV to another format for editing in Adobe Premier

I have a raw YUV video file that I want to do some basic editing on in Adobe Premiere CS6, but it won't recognize the file. I thought to use ffmpeg to convert it to something Premiere would take in, but I want this to be lossless because afterwards I will need it in YUV format again. I thought of AVI, MOV, and ProRes, but I can't seem to figure out the proper ffmpeg command line or how to ensure it is lossless.
Thanks for your help.
Yes, this is possible. It is normal that you can't open that raw video file, since it is just raw data in one giant file, without any headers. So Adobe Premiere doesn't know what the frame size is, what the framerate is, etc.
First make sure you have downloaded the FFmpeg command-line tool. After installing it, you can start converting by running a command with parameters. There are some parameters you have to fill in yourself before you start:
Which YUV pixel format are you using? The most common format is YUV 4:2:0 planar 8-bit (yuv420p). You can type ffmpeg -pix_fmts to get a list of all available formats.
What is the framerate? In my example I will use -r 25 fps.
What encoder do you want to use? The libx264 (H.264) encoder is a great one for lossless compression.
What is your frame size? In my example I will use -s 1920x1080.
This gives the following command to do your compression:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4
A little explanation of all other parameters:
With -f rawvideo you set the input format to a raw video container
With -vcodec rawvideo you tell ffmpeg the input file is not compressed
With -i inputfile.yuv you set your input file
With -c:v libx264 you set libx264 as the encoder for the video.
The -preset ultrafast setting only speeds up the encoding, so your file size will be bigger than with a slower preset such as veryslow.
With -qp 0 you get lossless encoding: 0 is lossless/best and 51 is the worst quality.
Then output.mp4 is your new container to store your data in.
After you are done in Adobe Premiere, you can convert it back to a YUV file by inverting almost all parameters. FFmpeg recognizes what's inside the MP4 container, so you don't need to provide parameters for the input.
ffmpeg -i input.mp4 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 rawvideo.yuv
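To check that the round trip really is lossless (a sketch, not part of the original answer), you can hash every frame of the original and of the re-exported raw file and compare the results:
ffmpeg -f rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -f framemd5 original.md5
ffmpeg -f rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i rawvideo.yuv -f framemd5 roundtrip.md5
If the two .md5 files are identical, no information was lost.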

How do I set buffer for MP4 in FFmpeg?

I convert videos to MP4 for my web player. My problem is: My videos don't buffer. I have to wait until the whole video is downloaded, and after that, I can play the video.
This is my exec() command:
ffmpeg -i uploaded_files/'.$le["file"].' -vcodec libx264 -pix_fmt yuv420p flash/'.$le["file"].'.mp4
Are there any options for buffering? My MP4 size and quality are good, but without buffering it's bad.
Is this the fault of the exec() command I use?
Use the -movflags faststart option while encoding, e.g.
ffmpeg -i input.mp4 […] -movflags faststart output.mp4
Or, alternatively, run qt-faststart on the file.
The reason the files don't stream immediately is that their MOOV atom is at the end of the file, and in order to play it, the client needs to parse this info. qt-faststart will just move that atom and your files will start playing right away.
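Applied to the command from the question (a sketch; the file names stand in for the PHP variables), the full line would look something like:
ffmpeg -i uploaded_files/video.mp4 -vcodec libx264 -pix_fmt yuv420p -movflags faststart flash/video.mp4
The flag only changes where the moov atom is written, so quality and file size stay essentially the same.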

Changing resolution mid-video with FFMPEG

I have a source video (mpeg2video) which I'm transcoding to x264. The source contains 2 different programs recorded from TV. One is in 4:3 AR and the other 16:9 AR. When I play the source file through VLC the player correctly changes size to show the video at the correct AR. So far so good.
When I transcode, the conversion process auto-detects the AR from the first few frames and then transcodes the whole video using this AR. If the 16:9 section comes first then the whole conversion is done in 16:9 and the 4:3 section looks stretched horizontally. If the 4:3 section is at the start of the source file then the whole transcode is done in 4:3 and the 16:9 section looks squashed horizontally.
No black bars are ever visible.
Here's my command:
nice -n 17 ffmpeg -i source.mpg -acodec libfaac -ar 48000 -ab 192k -async 1 -copyts -vcodec libx264 -b 1250k -threads 2 -level 31 -map 0:0 -map 0:1 -map 0:2 -scodec copy -deinterlace output.mkv
I don't fully understand what's going on. How do I get the same 'change in AR' mid video in the output file that I have in the input video?
I don't think ffmpeg is designed to do that midway. You will have to write your own application using libav for it. The simpler way would be to create two chunks of video that you combine.
EDIT:
The best way to deal with it is to detect the change of AR yourself, transcode the two segments separately, and join them.
EDIT2:
Use ffmpeg itself to chunk the video, demux anything you want and mux it back again. It should work fine. You needn't use avidemux.
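A rough sketch of the chunk-and-join approach (the 00:30:00 split point is made up; use the timestamp where the AR actually changes):
ffmpeg -i source.mpg -t 00:30:00 -c copy part1.mpg
ffmpeg -ss 00:30:00 -i source.mpg -c copy part2.mpg
Transcode each part with your existing x264 command, then join the results with the concat demuxer:
printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mkv
Whether the player honours the aspect-ratio change at the join still depends on the container and decoder, so test the result in your target player.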

Create MP4 video using FFMPEG and JPEG2000 frames

I'm trying to create an MP4 video with ffmpeg using JPEG2000 images as frames.
It works when the JPEG2000 is 8bpp, but I need it to work for at least 12 bits (ideally 12, but could be 16). The images are grayscale.
This is the command I'm using:
ffmpeg.exe -i imagen.jp2 video1.mp4
If I try to use -pix_fmt it says it's not supported by the encoder (it doesn't matter which format I use).
Some sample images can be found here:
http://ioingresodemanda.com/jp2.rar
I could also use any other tool, it doesn't need to be ffmpeg.
UPDATE: Adding ffmpeg output - http://pastebin.com/NyY3vgpz
Thanks in advance
If you are OK with the MP4 file having a different video codec, the following will work:
ffmpeg -strict -2 -i 12bit.jp2 -vcodec libx264 -an out.mp4
ffmpeg -strict -2 -i 12bit.jp2 -vcodec mpeg4 -an out.mp4
ffmpeg doesn't support 12-bit color. Most of the H.264 profiles only support 8-bit color; a few support 10-bit, and only the super-obscure lossless Hi444PP profile supports 14-bit color. The x264 encoder does support some of the 10-bit profiles, but that's as far as it goes, and you have to explicitly enable it using the --bit-depth option:
http://git.videolan.org/?p=x264.git;a=commit;h=d058f37d9af8fc425fa0626695a190eb3aa032af
As noted in the commit, you may also want to keep in mind that "very few H.264 decoders support >8 bit depth currently".
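If your ffmpeg build is linked against a 10-bit-capable x264 (an assumption; many current builds are), you can request a 10-bit encode directly by asking for a 10-bit pixel format, e.g.:
ffmpeg -i 12bit.jp2 -c:v libx264 -pix_fmt yuv420p10le -an out.mp4
This still discards the bottom two bits of a 12-bit source, and as noted above many H.264 decoders won't handle anything above 8-bit.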
