Python pygame continuous image loading FPS

Using pygame on a Linux machine, continuously loading new images and displaying them slows the program down.
The input images (400x300) are in PPM format to keep the file size constant (360K), so I/O stays uniform and there are no decompression delays.
It starts off at 50 frames per second and after around 2 minutes it drops to around 25 frames per second.
import pygame

pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((800, 600), pygame.FULLSCREEN)
frame = 1
while 1:
    image = pygame.image.load(str(frame) + ".ppm")
    screen.blit(image, (0, 0))
    pygame.display.flip()
    clock.tick(240)
    frame = frame + 1
    if frame % 10 == 0:
        print(clock.get_fps())
What can be done to keep the frame rate more consistent?
Most likely it has something to do with old references to images that need to be garbage collected, but maybe not.
Is there any way to load images continuously without creating new objects and triggering the garbage collector, or whatever else is slowing the system down?

After many weeks of pondering, I think I finally figured out what your problem is. For some reason, the computer must be remembering the old values of image. After the line that blits, put
    del image
I'm not entirely sure, but it might work.

import pygame

pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((800, 600), pygame.FULLSCREEN)
frame = 1
# Load the image once; the same Surface can then be used again and again.
# You could also `del image` each frame, but then it has to be loaded from
# disk every time, which is what takes the time; reusing it does not.
image = pygame.image.load(str(frame) + ".ppm")
while 1:
    screen.blit(image, (0, 0))
    pygame.display.flip()
    clock.tick(240)
    frame = frame + 1
    if frame % 10 == 0:
        print(clock.get_fps())
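Neither answer keeps the original goal of cycling through many images. A third option, assuming the whole sequence fits in memory (the count of 100 frames here is hypothetical, as is the helper name preload_frames), is to load every Surface once before the loop and call convert() on each, so neither disk I/O nor pixel-format conversion happens per frame:

```python
def preload_frames(load, count):
    # Build the names "1.ppm" .. "<count>.ppm" and load each image exactly
    # once, so the draw loop reuses cached Surfaces instead of reading the
    # disk every frame.
    return [load(str(i) + ".ppm") for i in range(1, count + 1)]

def run(num_frames=100):
    # Example driver (not invoked here); num_frames is a hypothetical count.
    import pygame
    pygame.init()
    screen = pygame.display.set_mode((800, 600), pygame.FULLSCREEN)
    clock = pygame.time.Clock()
    # convert() matches each Surface to the display's pixel format up front,
    # so later blits avoid a per-frame conversion.
    frames = preload_frames(
        lambda name: pygame.image.load(name).convert(), num_frames)
    frame = 0
    while 1:
        screen.blit(frames[frame % num_frames], (0, 0))
        pygame.display.flip()
        clock.tick(240)
        frame += 1
        if frame % 10 == 0:
            print(clock.get_fps())
```

Since no new Surface objects are created inside the loop, the garbage collector has nothing to chase, and the per-frame cost stays flat.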

Related

Metal on macOS – Monitor refresh sync?

I am in the process of converting some old OpenGL code to Metal.
At the moment I am using a MTKView to render a memory buffer to a window. I'm using it with paused = YES, enableSetNeedsDisplay = NO, and manual calls to draw() from my rendering loop.
Everything appears to be working, except for the fact that I am limited to 60 frames per second for no obvious reason. I suspect that Metal is synchronising to the monitor refresh when I don't want it to.
When I resize the window my frame rate temporarily jumps to 150+ frames per second, which tells me that the limit is not of my making.
Does anyone know how to stop this frame rate limit? I have tried setting preferredFramesPerSecond to different values (both lower and higher) but this seems to have no effect.
Thanks in advance for any pointers.
Typically, I figured it out a few minutes after asking the question:
CAMetalLayer *c = (CAMetalLayer *)self.layer;
c.displaySyncEnabled = NO;

measure elapsed time of image loading in Corona SDK

I'm trying to benchmark the loading of large images in Corona SDK.
local startTime = system.getTimer()
local myImage = display.newImageRect( "someImage.jpg", 1024, 768 )
local endTime = system.getTimer()
print( endTime - startTime ) -- prints 8.4319999999998
This returns values of around 8 ms. I know it takes longer to load and display an image, because if it really took 8 ms I wouldn't notice the delay, but I do. I'd say it takes about 300 ms.
Also, the FPS drops drastically when loading a large image. I'm monitoring this with an enterFrame listener, and when loading the image it prints values of around 0.3 for one frame.
local function onEnterFrame( event )
    print( display.fps )
end
Runtime:addEventListener( "enterFrame", onEnterFrame )
The frame takes a long time to render when loading, even when the loading of the image itself takes less than 1/60 of a second, so I guess the rendering is happening asynchronously somewhere else.
So, how can I measure the time it takes to really load and display an image?
Since Corona SDK is closed source, we'll have to use the docs and imagination.
I see three possibilities here:
Corona is doing what it says, and your subjective experience is wrong.
Corona is loading the images in a background thread, so the call to display.newImageRect is non-blocking: it "starts" loading the image and then continues. When this happens in other SDKs (mostly JavaScript-based ones) you get a "ready" callback on the image object, but I could not find such a thing in the docs.
Corona loads the image quickly, but requires "extra work afterwards". For example, it generates lots of garbage which has to be garbage-collected. So the image gets loaded fast, but then this "extra work" slows down the app.
My bet is on 3. But this doesn't really matter. Independently of which one of these options is causing the slowdowns, they can be solved the same way: instead of loading the images right before you draw them, you have to preload them.
I don't use Corona SDK, but a quick Google search pointed me to the storyboard module, in particular to storyboard.loadScene.
Create a new scene, list all the images that you need on it, and load it before showing it - that way image loading will be done in advance, not slowing down your app.
Most likely the image is rendered during the scene's rendering loop, and there is no event to indicate that an image has been rendered. However, if you create the display object in the scene's create event handler or in a button click handler, and register an enterFrame event handler, you can measure the time between that and the first frame event. I can't try this here, but my guess is this will give you an estimate of the time to render the image; a larger image will probably give you a larger measurement. Don't use FPS.
If you measure the time between enterFrame events, you will probably find that it is much smaller than the time between the create/click event and the first frame event, or between the first two frame events after the create/click event. Post a comment if you would like to see some example code.
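The measurement idea above (timing from object creation to the first frame event, rather than around the constructor call) can be illustrated with a toy Python model of a runtime that defers the real work to the next render pass. ToyRuntime and its 50 ms cost are invented for this sketch; they are not Corona APIs:

```python
import time

class ToyRuntime:
    # Stand-in for an engine where the constructor only queues work and the
    # expensive decode/upload happens during the next render pass.
    def __init__(self):
        self._pending = []

    def new_image(self, cost_s):
        self._pending.append(cost_s)  # constructor just queues the work

    def render_frame(self):
        for cost in self._pending:    # the deferred work runs here
            time.sleep(cost)
        self._pending.clear()

rt = ToyRuntime()

t0 = time.perf_counter()
rt.new_image(0.05)                         # "load" a 50 ms image
constructor_s = time.perf_counter() - t0   # tiny: nothing really ran yet

t1 = time.perf_counter()
rt.render_frame()                          # first frame after creation
first_frame_s = time.perf_counter() - t1   # contains the real cost
```

Timing around the constructor measures almost nothing, while timing from creation to the first frame captures the deferred cost, which matches the 8 ms vs. ~300 ms discrepancy described in the question.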

Matplotlib animation: setting fps=90 creates a movie file properly, but setting fps=120 does not?

Could you help me understand why this might be the case? This is what my command looks like:
ani.save("animation.avi", codec="libx264", fps=120)
The movie file produced by the above command is very small, on the order of hundreds of kilobytes, and playing it just shows a static picture (the very first frame).
ani.save("animation.avi", codec="libx264", fps=90)
This, on the other hand, creates a movie file of a reasonable size, on the order of megabytes, and it plays as an animation as well.
At first I thought this was an issue with a specific writer (avconv), but the problem occurs even when no writer is specified, so it might be a general issue with my particular setup, or with matplotlib.
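Since there is no accepted answer here, one thing worth trying (an assumption, not a confirmed fix) is constructing the FFMpegWriter explicitly so the codec options are under your control; with libx264, passing -pix_fmt yuv420p via extra_args is a common remedy when an output plays as a single static frame in some players. writer_kwargs and the tiny animation below are invented for this sketch:

```python
def writer_kwargs(fps):
    # Keep the save settings in one place so they can be handed to an
    # explicit writer instead of relying on backend defaults.
    return {"fps": fps, "codec": "libx264",
            "extra_args": ["-pix_fmt", "yuv420p"]}  # widely playable format

def save_animation(fps=120):
    # Example driver (not invoked here).
    import matplotlib
    matplotlib.use("Agg")  # render off-screen while saving
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation, FFMpegWriter

    fig, ax = plt.subplots()
    ax.set_xlim(0, 60)
    ax.set_ylim(0, 60)
    line, = ax.plot([], [])

    def update(i):
        line.set_data(range(i), range(i))
        return line,

    ani = FuncAnimation(fig, update, frames=60)
    ani.save("animation.mp4", writer=FFMpegWriter(**writer_kwargs(fps)))
```

Comparing the fps=120 and fps=90 outputs with the same explicit writer would at least tell you whether the problem lies in the writer defaults or in the frame rate itself.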

Detecting faces quickly from video using vision.CascadeObjectDetector in MATLAB

I wrote MATLAB code for face detection. It detects a face in each of the first 100 frames, crops the face from each frame, and saves it in a database folder. The problems I am facing:
1. Detecting frame by frame is very slow. Is there any way to run it faster, since I have to work on 4000 frames?
2. The database folder should contain face images 1 to 100, but it does not: after the 10th image it jumps straight to the 13th, so the 11th and 12th are missing, and the 23rd face image is blurry. Many images are missing like this and some are blurry. The last image number is 216, but there are only 106 face images in the folder, of which 12 are blurry; the rest are correct.
clc;
clear all;
obj = vision.VideoFileReader('basu.avi');
for k = 0:99
    videoFrame = step(obj);
    % using the Viola-Jones algorithm
    FaceDetect = vision.CascadeObjectDetector;
    %FaceDetect
    BB = step(FaceDetect, videoFrame);
    %BB
    figure(2), imshow(videoFrame);
    for i = 1:size(BB,1)
        rectangle('Position',BB(i,:),'LineWidth',3,'LineStyle','-','EdgeColor','r');
    end
    % crop faces and convert them to gray
    for i = 1:size(BB,1)
        J = imcrop(videoFrame, BB(i,:));
        I = rgb2gray(imresize(J, [292,376]));
        % save cropped faces in the database folder
        filename = ['G:\matlab_installed\bin\database\' num2str(i+k*(size(BB,1))) '.jpg'];
        imwrite(I, filename);
    end
end
There are a few things you can try:
Definitely move FaceDetect = vision.CascadeObjectDetector; outside of the loop. You only need to create the face detector object once. Re-creating it for every frame is definitely your performance bottleneck.
vision.VideoFileReader returns a frame of class 'single' by default. If you change the output data type to 'uint8', that should speed up the face detector. Use obj=vision.VideoFileReader('basu.avi', 'VideoOutputDataType', 'uint8');
vision.VideoFileReader can also do the conversion to grayscale for you. Use obj=vision.VideoFileReader('basu.avi', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity'); This may be faster than calling rgb2gray.
Try limiting the size of the faces being detected using 'MinSize' and 'MaxSize' options of vision.CascadeObjectDetector and/or try downsampling the frame before detecting faces.
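The first point, hoisting the expensive detector construction out of the loop, is a general pattern. A toy Python illustration (SlowDetector is invented; its 20 ms construction cost stands in for building vision.CascadeObjectDetector each frame):

```python
import time

class SlowDetector:
    # Construction is expensive; detection itself is cheap.
    def __init__(self):
        time.sleep(0.02)   # pretend model loading takes 20 ms

    def detect(self, frame):
        return []          # no faces in this toy example

def per_frame_construction(frames):
    # Anti-pattern from the question: a fresh detector for every frame.
    return [SlowDetector().detect(f) for f in frames]

def hoisted_construction(frames):
    # Fix: create the detector once and reuse it inside the loop.
    det = SlowDetector()
    return [det.detect(f) for f in frames]

frames = range(10)
t0 = time.perf_counter(); per_frame_construction(frames)
slow = time.perf_counter() - t0        # pays the construction cost 10 times
t0 = time.perf_counter(); hoisted_construction(frames)
fast = time.perf_counter() - t0        # pays it once
```

With 4000 frames the difference scales accordingly: the per-frame version pays the construction cost 4000 times, the hoisted one exactly once.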

How does software like GotoMeeting capture an image of the desktop?

I was wondering how software like GotoMeeting captures the desktop. I can do a full-screen (or block-by-block) capture using GDI, but that just seems too wasteful to me. I have also looked into mirror devices, but I was wondering if there is a simpler technique, or a library out there, that does this.
I need fast and efficient desktop screen capture (10-15 fps) which I will eventually convert into a video file and integrate with my application, to send the captured feed over the network.
Thanks!
Yes, taking a screen capture and computing the diff against the previous capture would be a good way to reduce transmission bandwidth, by sending only the changes. Of course, this is similar to video encoding techniques, which do this block by block.
It still means you need to do a capture plus extra processing to get the difference, i.e. to encode it.
By using mirror devices you can get both a pointer to the screen and the updated rectangles, i.e. the rectangles that have changed. You will want to filter some of them out, because you can get thousands of rectangles in one second.
I would either do full-screen captures and then perform image processing to isolate the parts of the screen that have changed (to save bandwidth), or use a program like CamStudio.
I got 20 to 30 frames per second using the mirror driver, displaying in a picture box, but on a full-screen update the frames get buffered because the picture box is slow. I changed to my own component, which is somewhat faster but still not good in full screen; on average I display 10 fps in full screen. My problem is rendering: I can capture 20 to 30 frames per second, but I can only render 8 to 10 full-screen frames per second. If anyone has achieved full-screen frame rendering, please reply.
What language?
.NET provides Graphics.CopyFromScreen.
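The capture-and-diff idea from the first answer can be sketched independently of any capture API. Here frames are plain 2-D lists of pixel values, and changed_blocks (a name made up for this example) returns the tiles that actually need to be re-sent:

```python
def changed_blocks(prev, curr, block=4):
    # Walk both frames in block x block tiles and record the top-left
    # corner (x, y) of every tile that differs; unchanged tiles need
    # not be transmitted at all.
    h, w = len(curr), len(curr[0])
    dirty = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile_differs = any(
                prev[yy][xx] != curr[yy][xx]
                for yy in range(y, min(y + block, h))
                for xx in range(x, min(x + block, w)))
            if tile_differs:
                dirty.append((x, y))
    return dirty
```

In a real screen-sharing pipeline the pixel comparison would run over raw capture buffers (or be replaced by the mirror driver's update rectangles), but the bandwidth saving comes from exactly this step: only the dirty tiles go over the network.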
