sws_scale performance compared to media players' real-time resizing - ffmpeg

While playing a 4K video, the user can resize the player's window, and the image is rescaled smoothly in real time.
On the other hand, a program written with libav that reads a 4K video file frame by frame and scales it down with sws_scale is much less efficient: it takes longer than the video's duration to resize it.
Why is that? Is it because the player's frame rate is lower and some frames are skipped, even though the video still looks smooth?

This is because most video players do scaling in the video card's hardware. With GL, for example, scaling (or even format conversion from YUV to RGB) is free.
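For comparison, here is a minimal CPU-side sketch (the helper name and the half-size target are only illustrative, assuming a decoded YUV420P source frame) showing how much the choice of libswscale filter matters; a GPU-backed player pays none of this cost because the texture sampler does the filtering.

#include <libswscale/swscale.h>
#include <libavutil/frame.h>

/* Sketch only: scale a decoded 4K YUV420P frame down to half size with a
 * cheap filter. Decoding and the source frame are assumed to exist already. */
static AVFrame *scale_half(const AVFrame *src)   /* hypothetical helper */
{
    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = src->width  / 2;
    dst->height = src->height / 2;
    if (av_frame_get_buffer(dst, 0) < 0) {
        av_frame_free(&dst);
        return NULL;
    }

    /* SWS_FAST_BILINEAR is far cheaper than the usual SWS_BICUBIC default. */
    struct SwsContext *sws = sws_getContext(src->width, src->height,
                                            (enum AVPixelFormat)src->format,
                                            dst->width, dst->height,
                                            (enum AVPixelFormat)dst->format,
                                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
    if (!sws) {
        av_frame_free(&dst);
        return NULL;
    }
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}

In a real decode loop the SwsContext should be created once and reused across frames; recreating it per frame, as this sketch does for brevity, is itself a common reason sws_scale looks slower than it needs to be.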

Related

Is it possible to watermark a video with libav without decoding the full video?

There is a small PNG image and a video file.
I want to overlay this PNG image onto the video, and I did it with libavcodec.
The CPU load of this overlay process is extremely high.
To reduce the performance impact, I overlay the PNG for only the first 10 seconds and then copy the stream from the old file to the new one after 10 seconds.
The problem is that the video is fine for the first 10 seconds, with the PNG overlaid.
After 10 seconds, the video stalls and begins to show abnormal green blocks at the top of the screen.
I figure the SPS/PPS may differ between the original stream and the transcoded stream. Could that be the root cause of the video stalling after 10 seconds?
Thank you.
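For the stream-copy part, a plain remux without decoding looks roughly like the sketch below (file names are placeholders and error handling is trimmed). The splice point between the re-encoded first 10 seconds and the copied remainder is exactly where mismatched SPS/PPS, which travel in the H.264 extradata, can cause stalls and green corruption.

/* Minimal stream-copy (remux) sketch with no decoding or encoding.
 * "in.mp4" / "out.mp4" are placeholder names; error handling is trimmed. */
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket *pkt = av_packet_alloc();

    avformat_open_input(&in, "in.mp4", NULL, NULL);
    avformat_find_stream_info(in, NULL);
    avformat_alloc_output_context2(&out, NULL, NULL, "out.mp4");

    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        /* Copying codec parameters also copies extradata (H.264 SPS/PPS).
         * If a copied tail and a transcoded head carry different parameter
         * sets, players may stall or show corrupted green blocks. */
        avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
        os->codecpar->codec_tag = 0;
    }

    avio_open(&out->pb, "out.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(out, NULL);

    while (av_read_frame(in, pkt) >= 0) {
        AVStream *is = in->streams[pkt->stream_index];
        AVStream *os = out->streams[pkt->stream_index];
        /* Rescale timestamps from the input to the output time base. */
        av_packet_rescale_ts(pkt, is->time_base, os->time_base);
        pkt->pos = -1;
        av_interleaved_write_frame(out, pkt);   /* takes ownership of pkt's data */
    }

    av_write_trailer(out);
    av_packet_free(&pkt);
    avio_closep(&out->pb);
    avformat_close_input(&in);
    avformat_free_context(out);
    return 0;
}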

rescale with ffmpeg but keep the pixel density constant

I have a video with a resolution of 1280x720.
After I resized it, the output video was blurrier than the original.
How can I make the output video clearer?
There is no pixel density for media displayed on an electronic display.
The physical space occupied by a video depends on:
- the physical size of the display
- the display's current resolution
- the window size of the video player
A video rescaled to a higher resolution will lose sharpness.
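If the rescaling goes through libswscale, the filter choice also affects how soft the result looks; a minimal sketch, assuming a 1280x720 YUV420P source upscaled to 1920x1080 (the helper name is hypothetical):

#include <libswscale/swscale.h>

/* Lanczos keeps more apparent sharpness than fast bilinear when upscaling,
 * but no filter can recover detail the 1280x720 source never had. */
struct SwsContext *make_upscaler(void)   /* hypothetical helper */
{
    return sws_getContext(1280, 720,  AV_PIX_FMT_YUV420P,
                          1920, 1080, AV_PIX_FMT_YUV420P,
                          SWS_LANCZOS, NULL, NULL, NULL);
}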

How do video players display video greater than the native resolution of the monitor?

I have a 1920x1080 resolution MP4 video file. It is encoded with the H.264 video codec.
My monitor's native resolution is 1280x780. I am able to play this file in VLC or Totem without any issue.
Can somebody explain to me how video players display video files larger than the monitor's native resolution?
Image scaling algorithms can be applied at different levels: by the video player itself, by the operating system, or even by the monitor hardware. The simplest method of image scaling is "nearest-neighbor scaling", which picks the colour of the nearest source pixel. There are more advanced techniques, however; you can find them in this article.
Stretching it down, that is, leaving some pixels out when painting it. It is the same as resizing an image down with a paint program.
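For reference, the nearest-neighbor method mentioned above fits in a few lines of C (8-bit grayscale buffers assumed, purely illustrative):

/* Illustrative nearest-neighbor resize of an 8-bit grayscale image.
 * src is sw x sh pixels; dst must have room for dw x dh pixels. */
static void nn_resize(const unsigned char *src, int sw, int sh,
                      unsigned char *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        int sy = y * sh / dh;              /* nearest source row    */
        for (int x = 0; x < dw; x++) {
            int sx = x * sw / dw;          /* nearest source column */
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
}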

Display Kinect ColorFrame in full screen

I want to display the Kinect color frame in WPF in full screen, but when I try it,
I only get very low-quality video frames.
Any idea how to do this?
The Kinect camera doesn't have great resolutions. Only 640x480 and 1280x960 are supported. Forcing these images to take up the entire screen, especially if you're using a high definition monitor (1920x1080, for example), will cause the image to be stretched, which generally looks awful. It's the same problem you run into if you try to make any image larger; each pixel in the original image has to fill up more pixels in the expanded image, causing the image to look blocky.
Really, the only thing to minimize this is to make sure you're using the Kinect's maximum color stream resolution. You can do that by specifying a ColorImageFormat when you enable the ColorStream. Note that this resolution has a significantly lower number of frames per second than the 640x480 stream (12 FPS vs 30 FPS). However, it should look better in a fullscreen mode than the alternative.
// Request the highest color resolution the sensor supports (1280x960 at 12 FPS).
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);

Best video codec for smooth 1920x1080 playback on older machines (quality not important)

I'm new to Video technology, so any feedback (such as if I've underspecified the problem) would be greatly appreciated.
I need to display an animation (currently composed of about a thousand PNGs) on Windows and am trying to determine the best video codec or parameters for the job.
Video playback must be smooth at 30 fps
Output display is 1920x1080 on a secondary monitor
Quality does not matter (within limits)
Will be displaying alpha blended animation on top, so no DXVA
Must run on older hardware (Core Duo 4400 + nVidia 9800)
Currently using DirectShow to display the video.
Question:
Is it easier on the CPU to shrink the source to 1/2 size (or even 1/4) and have the CPU stretch it at run time?
Is there a video codec that is easier on the CPU than others?
Are there parameters for video codecs that mean less decompression is required? (The video will be stored on the HD, so size doesn't matter except as it impacts program performance).
So far:
- H.264 from ffmpeg defaults produces terrible tearing and some stuttering.
- Uncompressed video from VirtualDub produces massive stuttering.
There are so many different degrees of freedom to this problem, I'm flailing. Any suggestions by readers would be much appreciated. Thank you.
MJPEG should work. I used it for 1080i60 some 3 years back, and the playback was never an issue. Even encoding worked on-the-fly with a machine of quite similar performance to what you describe.
Expect a data rate of about 10 MB per second of video for good quality.
Shrinking the video will help, because if you are drawing the video to screen using e.g. DirectX, you can use the GPU to stretch it.
