Remote Desktop's encoding is blurred when scrolling quickly - ffmpeg

When scrolling the remote screen, the image easily becomes blurred. Is this because the fps is not high enough, or because my parameters are wrong? I set framerate: 30 and gop_size: 60, but the fps changes dynamically and is often below 30 when scrolling. The image gradually sharpens after scrolling stops, and it becomes sharp immediately if a keyframe is sent.
I tried changing the frame rate and GOP size before creating the encoder, but saw no improvement.
The current codec is not fast enough, so the fps is limited. If it turns out to be an fps problem, I will find a way to increase the fps.
I'd like to understand the cause of the blur and how to fix it.
My repository is https://github.com/21pages/hwcodec ; advice welcome.
see scroll blur gif here
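
For reference, a rough sketch of the rate-control fields that usually govern this kind of motion blur, assuming the encoder is configured through FFmpeg's libavcodec API (the bitrate numbers below are only illustrative, not recommendations):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Sketch: blur during fast scrolling at a fixed bitrate is often rate
// control starving high-motion frames of bits; later frames then refine
// the picture, which matches the "gradually becomes clear" symptom.
void configure(AVCodecContext *ctx) {
    ctx->time_base = AVRational{1, 30};  // nominal 30 fps
    ctx->framerate = AVRational{30, 1};
    ctx->gop_size = 60;                  // keyframe every 2 s at 30 fps
    ctx->bit_rate = 4000000;             // average budget; too low -> blur on motion
    ctx->rc_max_rate = 8000000;          // allow bursts during fast scrolling
    ctx->rc_buffer_size = 8000000;       // VBV buffer sized to absorb the bursts
}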

Related

Is it possible to prevent tearing artifacts when drawing using GDI on a window with DWM composition?

I am drawing an animation using double-buffered GDI on a window, on a system where DWM composition is enabled, and seeing clearly visible tearing onscreen. Is there a way to prevent this?
Details
The animation takes the same image and moves it right to left across the screen; the horizontal offset is derived from the elapsed time (current time minus start time, measured with timeGetTime at 1 ms resolution) divided by the total duration, giving a fraction complete that is applied to the whole window width. The animation draws in a loop without processing application messages; it calls the (VCL library) method Repaint, which internally invalidates the window and then calls UpdateWindow, calling directly into the message procedure with WM_PAINT. The VCL implementation of the paint handler uses BeginBufferedPaint. Painting is itself double-buffered.
The aim of this is to have as high a frame rate as possible to get a smooth animation across the screen. (Double-buffering removes flickering and ensures a whole image or frame is onscreen at any one time; invalidation and updating happen by calling directly into the message procedure, without other message processing; painting uses modern techniques, e.g. BeginBufferedPaint, for Aero composition.) Within this, painting is done in a couple of BitBlt calls (one for the left side of the animation, i.e. what's moving offscreen, and one for the right side, i.e. what's moving onscreen).
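
For concreteness, the buffered-paint pattern described above looks roughly like this in raw Win32 (a sketch, not the actual VCL code; it assumes BufferedPaintInit() was called once at startup and BufferedPaintUnInit() at shutdown; link with uxtheme.lib):

#include <windows.h>
#include <uxtheme.h>

void OnPaint(HWND hwnd) {
    PAINTSTRUCT ps;
    HDC target = BeginPaint(hwnd, &ps);
    HDC mem = nullptr;
    HPAINTBUFFER buf = BeginBufferedPaint(target, &ps.rcPaint,
                                          BPBF_TOPDOWNDIB, nullptr, &mem);
    if (buf) {
        // BitBlt the two halves of the animation into `mem` here.
        EndBufferedPaint(buf, TRUE);  // TRUE: copy the buffer to the window
    }
    EndPaint(hwnd, &ps);
}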
When watching the animation, there is clearly visible tearing. This occurs on Windows Vista, 7 and 8.1 on multiple systems with different graphics cards.
My approach to handle this has been to reduce the rate at which it is drawing, or to try to wait for VSync before painting again. This might be the wrong approach, so the answer to this question might be "Do something else completely: X". If so, great :)
(What I'd really like is a way to ask the DWM to compose / use only fully-painted frames for this specific window.)
I've tried the following approaches, none of which remove all visible tearing. So the question is: is it possible to avoid tearing when using DWM composition, and if so, how?
Approaches tried:
Getting the monitor refresh rate via GetDeviceCaps(Application.MainForm.Handle, VREFRESH) and sleeping for one refresh period (1000 / refresh rate milliseconds). Slightly improved over painting as fast as possible, but that may be wishful thinking; perceptually the animation rate is slightly less smooth. (Tweaks: normal Sleep and a high-resolution spin-wait using timeGetTime.)
Using DwmSetPresentParameters to try to limit updating to the same rate at which the code draws. (Variations: lots of buffers (cBuffer = 8): no visible effect; specifying a source rate of monitor refresh rate / 1 and sleeping using the above code: the same as just trying the sleeping approach; specifying a refresh per frame of 1, 10, etc.: no visible effect; changing the source frame coverage: no visible effect.)
Using DwmGetCompositionTimingInfo in a variety of ways:
While cFramesPending > 0, spin;
Get cFrame (frame composed) and spin while this number doesn't change;
Get cFrameDisplayed and spin while this doesn't change;
Calculating a time to wait until by adding qpcVBlank + qpcRefreshPeriod, then spinning while QueryPerformanceCounter returns a time less than this (sketched below).
All these approaches have also been varied by painting, then spinning/sleeping before painting again; or the reverse: sleeping and then painting.
Few seem to have any visible effect, and what effect there is is hard to quantify and may just be the result of a lower frame rate. None prevent tearing, i.e. none make the DWM compose the window with a "whole" copy of the contents of the window's DC.
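For reference, a rough sketch of that last spin variant, assuming composition stays enabled for the whole run (DwmFlush() is a simpler alternative that just blocks until the next composition pass); link with dwmapi.lib:

#include <windows.h>
#include <dwmapi.h>

void SpinUntilNextVBlank() {
    DWM_TIMING_INFO ti = {};
    ti.cbSize = sizeof(ti);
    // nullptr asks for composition timing info for the screen.
    if (FAILED(DwmGetCompositionTimingInfo(nullptr, &ti)))
        return;  // composition disabled: nothing to sync to

    // Target QPC time of the next vblank.
    LONGLONG target = (LONGLONG)(ti.qpcVBlank + ti.qpcRefreshPeriod);
    LARGE_INTEGER now;
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < target);
}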
Advice appreciated :)
Since you're using BitBlt, make sure your DIBs are 4 bytes / pixel. With 3 bytes / pixel, GDI is horribly slow while DWM is running, and that could be the source of your tearing. Another BitBlt issue I've run into: if your DIB is somewhat large, the BitBlt call may take an unexpectedly long time. Splitting one call into smaller calls that each draw only a portion of the data might help. Both of these helped in my case, purely because BitBlt itself was running too slowly and causing video artifacts.
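
For reference, creating a DIB section in the recommended format might look like this (a sketch; the helper name is mine):

#include <windows.h>

HBITMAP Create32bppDib(HDC dc, int w, int h, void **bits) {
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth = w;
    bi.bmiHeader.biHeight = -h;    // negative height = top-down rows
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;  // 4 bytes/pixel: BGRA
    bi.bmiHeader.biCompression = BI_RGB;
    return CreateDIBSection(dc, &bi, DIB_RGB_COLORS, bits, nullptr, 0);
}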

SFML Game initially slow, yet speeds up permanently when very little is drawn

So I have established the base for an SFML game using a tilemap. For the most part, I've optimized it so it can run at a good 60 fps. However, I only get this 60 fps if at some point the map is halfway off the screen, so that less of it is being rendered. That would make sense on its own, since less being drawn means faster rendering, but once the fps increases it stays permanently, even if the map then fills the entire screen again. I can't understand this irregularity: I either have to start the map slightly offset, or move it offscreen for a moment, to get a solid fps. Clearly my computer is capable of rendering at this rate, since the fps stays there once it starts, but I can't understand why the map has to be offscreen momentarily for it to achieve this speed.

How to read video frame buffer in windows

I am trying to create a small project wherein I need to capture/read the video frame buffer and calculate the average RGB value of the screen.
I don't need to write anything on the screen. I'm doing this in Windows.
Can anyone help me with any Windows API which will read the video frame buffer and calculate the average RGB value?
From what I've read so far, I would need to write a kernel driver to get access to the frame buffer.
Is this the only solution?
Is there any other way of reading the frame buffer?
Is there an algorithm to calculate the RGB value from frame buffer data?
If you want really good performance, you might have to use DirectX and capture the backbuffer to a texture. Using mipmaps, it will automatically create downsampled versions all the way down to 1x1. Just grab the color of that one pixel and you're good to go.
Good luck, though; I'm working on implementing this as we speak. I'm building an ambient light controller for my room. I was getting about 15 fps using device contexts and StretchBlt, and only got decent performance when grabbing a single pixel with GetPixel(). That's on an i5 3570K @ 4.5 GHz.
But with the DirectX method you could technically get hundreds if not thousands of frames per second. (When I render a spinning triangle, my 660 gets about 24,000 fps; it couldn't be TOO much slower, minus the CPU calls.)
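
For the averaging step itself, here's a minimal sketch using plain GDI. This is the slow device-context path mentioned above, but the arithmetic is the same whichever API produces the pixels; the DirectX mipmap route computes the same average, just letting the GPU do the downsampling:

#include <windows.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    // Copy the screen into a compatible bitmap.
    HDC screen = GetDC(nullptr);
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);
    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);
    SelectObject(mem, old);  // deselect before calling GetDIBits

    // Pull the pixels out as top-down 32-bpp BGRA.
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth = w;
    bi.bmiHeader.biHeight = -h;  // negative height = top-down
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;
    bi.bmiHeader.biCompression = BI_RGB;
    std::vector<uint8_t> px((size_t)w * h * 4);
    GetDIBits(mem, bmp, 0, h, px.data(), &bi, DIB_RGB_COLORS);

    // Average the channels (pixel layout is B, G, R, X).
    uint64_t r = 0, g = 0, b = 0, n = (uint64_t)w * h;
    for (size_t i = 0; i < px.size(); i += 4) {
        b += px[i]; g += px[i + 1]; r += px[i + 2];
    }
    printf("average RGB = (%llu, %llu, %llu)\n", r / n, g / n, b / n);

    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
    return 0;
}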

Display Kinect color frame in full screen

I want to display the Kinect color frame in WPF in full screen, but when I try it, I get only very low-quality video frames.
Any idea how to do this?
The Kinect camera doesn't have great resolutions. Only 640x480 and 1280x960 are supported. Forcing these images to take up the entire screen, especially if you're using a high definition monitor (1920x1080, for example), will cause the image to be stretched, which generally looks awful. It's the same problem you run into if you try to make any image larger; each pixel in the original image has to fill up more pixels in the expanded image, causing the image to look blocky.
Really, the only way to minimize this is to make sure you're using the Kinect's maximum color stream resolution. You can do that by specifying a ColorImageFormat when you enable the ColorStream. Note that this resolution delivers significantly fewer frames per second than the 640x480 stream (12 FPS vs 30 FPS). However, it should look better in full-screen mode than the alternative.
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);

Working with huge textures with a static part

I'm writing an iPad cocos2d game with animations.
The designer gave me frames for each animated character as PNGs. I'm using TexturePacker to pack my textures, but one of the characters is very big (600x600 pixels), and there are 200 frames of animation. So it would take a huge amount of memory if I packed it into atlases with TP. But not all 600x600 pixels actually change: only the character's hands and legs move.
I think I should cut the static part out of the frames and cut the dynamic parts from each frame to decrease memory usage. Is there an existing tool for this? Or is there a better way to handle my situation?
AFAIK, there is no tool for such a task. And 200 frames at 600x600 pixels... I am sure you will not be able to fit all these frames in memory alongside other textures (backgrounds, effects, etc.). It is too much for a mobile device, even an iPad. You should ask your artist to reduce the frame count and size as much as possible.
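(For scale, assuming 32-bit RGBA and no atlas padding: 600 x 600 x 4 bytes is about 1.4 MB per frame, so 200 frames come to roughly 288 MB of texture data, far beyond a safe texture budget on an iPad of that era.)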
For example, a few months ago I got an animation with 200x300-pixel frames where the actual content was only about 100x100 pixels; the rest of each frame was filled with a glow. After the glow was removed it didn't look as cool as before, but it still looked good, and it eased the memory problems.
For others with the same problem:
In the end I gave up on cocos2d and wrote the game using video. Huge animations were prerendered into a video file; small animations I overlaid using imageView.animationImages.
You can change the video's playbackTime to add interactions.
