How to avoid perceived flicker during scrolling in Qt?

I'm trying to use the Qt framework (4.7.4) to demonstrate a sliding display in which new pixel data is added to the first row of the screen and the previous pixels are scrolled one pixel down on every refresh.
It is refreshed 20 times per second, and on every refresh random green points (pixels) are drawn on a black background.
The problem is that there is highly noticeable flicker on every refresh. I have researched through the web and optimized my code as much as possible. I tried raster rendering with both QPainter (on a QWidget) and QGraphicsScene (on a QGraphicsView), and I even tried OpenGL rendering on a QGLWidget. However, in the end I still have the same flicker problem.
What may cause this flickering? I am beginning to suspect that my LCD monitor cannot refresh the display for black-to-green transitions. I have also noticed that if I select a gray background instead of black, there is no flicker.

The effect you're seeing is purely psychovisual. It's a defect of human vision, not of the software. I'm serious. You can verify this by fixing the value of x - you'll still be repainting the entire pixmap on the window, yet there won't be any flicker, because there is no flicker per se.
The psychovisual flicker occurs when the scroll rate is not tied to the passage of real time. When the time between updates occasionally varies due to CPU load or system timer inaccuracies, our visual system integrates the two images, and it appears as if the overall brightness has changed.
You've correctly noticed that the perceived flicker is reduced as you reduce the contrast ratio of the image by setting the background to grey. This is an additional clue that the effect is psychovisual.
Below is a way of preventing the effect. Notice how the scroll distance is tied to the elapsed time (here: 1 ms = 1 pixel).
#include <QElapsedTimer>
#include <QPaintEvent>
#include <QBasicTimer>
#include <QApplication>
#include <QPainter>
#include <QPixmap>
#include <QWidget>
#include <QDebug>

static inline int rand(int range) { return (double(qrand()) * range) / RAND_MAX; }

class Widget : public QWidget
{
    float fps;
    qint64 lastTime;
    QPixmap pixmap;
    QBasicTimer timer;
    QElapsedTimer elapsed;

    void timerEvent(QTimerEvent * ev) {
        if (ev->timerId() == timer.timerId()) update();
    }
    void paintEvent(QPaintEvent * ev) {
        qint64 time = elapsed.elapsed();
        qint64 delta = time - lastTime;
        lastTime = time;
        if (delta > 0) {
            // exponential moving average of the frame rate
            const float weight(0.05);
            fps = (1.0-weight)*fps + weight*(1E3/delta);
            if (pixmap.size() != size()) {
                pixmap = QPixmap(size());
                pixmap.fill(Qt::black);
            }
            // the scroll distance is tied to elapsed time: 1 ms = 1 pixel
            int dy = qMin((int)delta, pixmap.height());
            pixmap.scroll(0, dy, pixmap.rect());
            QPainter pp(&pixmap);
            pp.fillRect(0, 0, pixmap.width(), dy, Qt::black);
            for (int i = 0; i < 30; ++i) {
                int x = rand(pixmap.width());
                pp.fillRect(x, 0, 3, dy, Qt::green);
            }
        }
        QPainter p(this);
        p.drawPixmap(ev->rect(), pixmap, ev->rect());
        p.setPen(Qt::yellow);
        p.fillRect(0, 0, 100, 50, Qt::black);
        p.drawText(rect(), QString("FPS: %1").arg(fps, 0, 'f', 0));
    }
public:
    explicit Widget(QWidget *parent = 0) : QWidget(parent), fps(0), lastTime(0), pixmap(size())
    {
        timer.start(1000/60, this);
        elapsed.start();
        setAttribute(Qt::WA_OpaquePaintEvent);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    Widget w;
    w.show();
    return a.exec();
}

I'd recommend you do not scroll the pixmap in-place, but create a second pixmap and use drawPixmap() to copy everything but one line from pixmap 1 to pixmap 2 (with the scroll offset). Then continue painting on pixmap 2. After the frame, exchange the references to both pixmaps, and start over.
The rationale is that copying from one memory area to a different one can be optimised more easily than modifying one memory area in-place.
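A minimal sketch of that idea, under the same assumptions as the example above (the advanceFrame() helper and the front/back members are hypothetical, and their sizing is omitted; rand() is the helper defined earlier):

QPixmap front, back; // both sized to the widget, e.g. in a resize handler

void advanceFrame(int dy) // member of the widget
{
    QPainter p(&back);
    // copy everything but the top dy rows from 'front' into 'back', shifted down
    p.drawPixmap(0, dy, front, 0, 0, front.width(), front.height() - dy);
    p.fillRect(0, 0, back.width(), dy, Qt::black); // fresh row(s) at the top
    for (int i = 0; i < 30; ++i)
        p.fillRect(rand(back.width()), 0, 3, dy, Qt::green);
    p.end();
    qSwap(front, back); // exchange the references and start over
    update();           // paintEvent() then draws 'front'
}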

Related

Correct way to draw zoomable audio waveform

I am trying to implement a smoothly zoomable audio waveform but am puzzled about the correct approach to implement zoom. I searched the internet but there is very little information.
So here is what I have done:
Read audio samples from the file and compute waveform points with samplesPerPixel = 10, 20, 40, 80, ..., 10240. Store the data points for each scale (11 in total here). The max and min are also stored along with the points for each samplesPerPixel.
When zooming, switch to the closest dataset. So if samplesPerPixel at the current width is 70, then use the dataset corresponding to samplesPerPixel = 80. The correct dataset index is easily found using log2(samplesPerPixel).
Use subsampling of the dataset to draw the waveform points. So if samplesPerPixel = 41 and we are using the dataset for zoom 80, then we subsample with the scaling factor 80/41:
let scaleFactor = 80.0 / 41.0
x = waveformPointX[Int(Double(i) * scaleFactor)]
I have yet to find a better approach and am not sure the above approach of subsampling is correct, but this approach certainly consumes a lot of memory and is slow to load data at the start. How do audio editors implement zooming in waveforms? Is there an efficient approach?
EDIT: Here is the code for computing the mipmaps.
import Foundation
import CoreGraphics

public class WaveformAudioSample {
    var samplesPerPixel: Int = 0
    var totalSamples: Int = 0
    var samples: [CGFloat] = []
    var sampleMax: CGFloat = 0
}

private func downSample(_ waveformSample: WaveformAudioSample, factor: Int) -> WaveformAudioSample {
    NSLog("Averaging samples")
    let downSampledAudioSamples = WaveformAudioSample()
    downSampledAudioSamples.samples = [CGFloat](repeating: 0, count: waveformSample.samples.count / factor)
    downSampledAudioSamples.samplesPerPixel = waveformSample.samplesPerPixel * factor
    downSampledAudioSamples.totalSamples = waveformSample.totalSamples
    for i in 0..<waveformSample.samples.count / factor {
        var total: CGFloat = 0
        for j in 0..<factor {
            total += waveformSample.samples[i * factor + j]
        }
        let averagedSample = total / CGFloat(factor)
        downSampledAudioSamples.samples[i] = averagedSample
        // keep the running maximum, since the class stores it
        downSampledAudioSamples.sampleMax = max(downSampledAudioSamples.sampleMax, averagedSample)
    }
    NSLog("Averaged samples")
    return downSampledAudioSamples // return the result so the caller can keep it
}
You should use power of 2 sizes for your data.
This allows you to use cheap bit shifts and simple resizing without any costly floating point operations or integer multiplication and division.
You should build half-resolution mipmaps from the previous mipmap.
Each output sample is then created from just 2 samples of the previous mipmap, so there are no nested for loops or costly index computations.
Do not mix floating point and integer computations if you can avoid it.
Even if you have an FPU, the conversion between int and float is usually very slow. Ideally keep your audio data in integer format...
Here is a small C++/VCL example of these ideas:
//$$---- Form CPP ----
//---------------------------------------------------------------------------
#include <vcl.h>
#include <math.h>
#pragma hdrstop
#include "win_main.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
int xs,ys;              // screen resolution
Graphics::TBitmap *bmp; // back buffer bitmap for rendering
//---------------------------------------------------------------------------
// input data
const int samples=1024;
int sample[samples];
// mipmaps: max 32 resolutions -> 2^32 samples input
int *mmdat0[32]={NULL}, // min
    *mmdat1[32]={NULL}, // max
     mmsiz [32]={0};    // resolution
//---------------------------------------------------------------------------
void generate_input(int *data,int size)
{
    int i; float a,da;
    da=10.0*M_PI/float(size-1);
    for (a=0.0,i=0;i<size;i++,a+=da)
    {
        data[i]=float(100.0*sin(a))+Random(40)-20;
    }
}
//---------------------------------------------------------------------------
void mipmap_free()
{
    // free allocated mipmaps if needed
    if (mmdat0[0]) delete[] mmdat0[0];
    mmdat0[0]=NULL;
    mmdat1[0]=NULL;
    mmsiz [0]=0;
}
//---------------------------------------------------------------------------
void mipmap_compute(int *data,int size)
{
    int i,j,k,n,N,a,a0,a1;
    mipmap_free();
    for (N=0,n=size;n;N+=n,n>>=1);  // compute the size of all mipmaps together
    mmdat0[0]=new int[N+N];         // allocate space for all mipmaps as a single 1D array
    mmdat1[0]=mmdat0[0]+N;          // max will be at the other half
    mmsiz [0]=size;
    for (i=1,n=size;n;n>>=1,i++)    // and just set the pointers of the sub mipmaps
    {
        mmdat0[i]=mmdat0[i-1]+n;    // to point at the right place
        mmdat1[i]=mmdat1[i-1]+n;    // to point at the right place
        mmsiz [i]=mmsiz [i-1]>>1;   // and set the resolution as half
    }
    // copy first mipmap
    n=size;
    for (i=0;i<mmsiz[0];i++)
    {
        a=data[i];
        mmdat0[0][i]=a;
        mmdat1[0][i]=a;
    }
    // process all resolutions
    for (k=1;mmsiz[k];k++)
    {
        // halve resolution
        for (i=0,j=0;i<mmsiz[k];i++)
        {
            a=mmdat0[k-1][j];      a0=a;
            a=mmdat1[k-1][j]; j++; a1=a;
            a=mmdat0[k-1][j];      if (a0>a) a0=a;
            a=mmdat1[k-1][j]; j++; if (a1<a) a1=a;
            mmdat0[k][i]=a0;
            mmdat1[k][i]=a1;
        }
    }
}
//---------------------------------------------------------------------------
void draw() // just the render of my App
{
    bmp->Canvas->Brush->Color=clWhite;
    bmp->Canvas->FillRect(TRect(0,0,xs,ys));
    int ix,x,y,y0=ys>>1;
    // plot input data
    bmp->Canvas->Pen->Color=clBlack;
    x=0; y=y0-sample[x];
    bmp->Canvas->MoveTo(x,y);
    for (x=1;x<xs;x++)
    {
        y=y0-sample[x];
        bmp->Canvas->LineTo(x,y);
    }
    // plot mipmap[ix] data
    ix=1;
    bmp->Canvas->Pen->Color=clBlue;
    x=0; y=y0-sample[x];
    bmp->Canvas->MoveTo(x,y);
    for (x=0;x<mmsiz[ix];x++)
    {
        y=y0-mmdat0[ix][x];
        bmp->Canvas->LineTo(x,y);
        y=y0-mmdat1[ix][x];
        bmp->Canvas->LineTo(x,y);
    }
    Form1->Canvas->Draw(0,0,bmp);
    // bmp->SaveToFile("out.bmp");
}
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner):TForm(Owner) // init of my app
{
    // init backbuffer
    bmp=new Graphics::TBitmap;
    bmp->HandleType=bmDIB;
    bmp->PixelFormat=pf32bit;
    generate_input(sample,samples);
    mipmap_compute(sample,samples);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormDestroy(TObject *Sender) // not important, just the destructor of my App
{
    mipmap_free();
    delete bmp;
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormResize(TObject *Sender) // not important, just the resize event
{
    xs=ClientWidth;
    ys=ClientHeight;
    bmp->Width=xs;
    bmp->Height=ys;
    draw();
}
//-------------------------------------------------------------------------
void __fastcall TForm1::FormPaint(TObject *Sender) // not important, just the repaint event
{
    draw();
}
//---------------------------------------------------------------------------
Ignore the VCL window and rendering related stuff (I just wanted to pass the whole source so you can see how it is used). The only important part is the function mipmap_compute, which converts your input data into 2 mipmaps. One holds the min values and the other the max values.
The dynamic allocations are not important; the only important code chunk is marked with the comment:
// process all resolutions
where for each mipmap there is only a single for loop without any expensive operations. If your platform handles branchless code better, you can compute the min, max using the built-in branchless functions min, max. Something like:
// process all resolutions
for (k=1;mmsiz[k];k++)
{
    // halve resolution
    for (i=0,j=0;i<mmsiz[k];i++)
    {
        a=mmdat0[k-1][j];      a0=a;
        a=mmdat1[k-1][j]; j++; a1=a;
        a=mmdat0[k-1][j];      a0=min(a0,a);
        a=mmdat1[k-1][j]; j++; a1=max(a1,a);
        mmdat0[k][i]=a0;
        mmdat1[k][i]=a1;
    }
}
This can be further optimized simply by using pointers to the currently selected mipmaps, which gets rid of the [k] and [k-1] indexing and saves one memory access per element:
// process all resolutions
for (k=1;mmsiz[k];k++)
{
    // halve resolution
    int *p0=mmdat0[k-1];
    int *p1=mmdat1[k-1];
    int *q0=mmdat0[k];
    int *q1=mmdat1[k];
    for (i=0,j=0;i<mmsiz[k];i++)
    {
        a=p0[j];      a0=a;
        a=p1[j]; j++; a1=a;
        a=p0[j];      a0=min(a0,a);
        a=p1[j]; j++; a1=max(a1,a);
        q0[i]=a0;
        q1[i]=a1;
    }
}
Now all you need is to bilinearly interpolate between 2 mipmaps to achieve your target resolution; here is a small example of that:
// actually rescaled output
int out0[samples];  // min
int out1[samples];  // max
int outs=0;         // size

void resize(int n)  // compute out0[n],out1[n] from mipmaps
{
    int i,*p0,*p1,*q0,*q1,pn,qn;
    int pc,qc,pd,qd,pi,qi;
    int a,a0,a1,b0,b1,bm,bd;
    for (i=0;mmsiz[i]>=n;i++);  // find smaller resolution
    pn=mmsiz[i];
    p0=mmdat0[i];
    p1=mmdat1[i]; i--;
    qn=mmsiz[i];                // bigger or equal resolution
    q0=mmdat0[i];
    q1=mmdat1[i]; outs=n;
    pc=0; pi=0;
    qc=0; qi=0;
    bm=n-pn; bd=qn-pn;
    for (i=0;i<n-1;i++)
    {
        // bilinear interpolation (3x linear)
        a0=q0[qi];
        a1=q0[qi+1];
        b1=a0+(((a1-a0)*qc)/n);
        a0=p0[pi];
        a1=p0[pi+1];
        b0=a0+(((a1-a0)*pc)/n);
        out0[i]=b0+(((b1-b0)*bm)/bd);  // /bd might be a bit shift right by log2(bd)
        // bilinear interpolation (3x linear)
        a0=q1[qi];
        a1=q1[qi+1];
        b1=a0+(((a1-a0)*qc)/n);
        a0=p1[pi];
        a1=p1[pi+1];
        b0=a0+(((a1-a0)*pc)/n);
        out1[i]=b0+(((b1-b0)*bm)/bd);  // /bd might be a bit shift right by log2(bd)
        // DDA increment indexes
        pc+=pn; while (pc>=n){ pi++; pc-=n; }  // pi = (i*pn)/n
        qc+=qn; while (qc>=n){ qi++; qc-=n; }  // qi = (i*qn)/n
    }
    out0[n-1]=q0[pn-1];
    out1[n-1]=q1[pn-1];
}
Beware: the target size n must be less than or equal to the highest mipmap resolution...
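A hypothetical usage sketch (view_width and drawMinMaxColumn are placeholders for your view size and plotting routine, not part of the code above):

// whenever the zoom changes, rebuild the plot data at the new width
resize(view_width);  // fills out0/out1 with view_width min/max pairs
for (int x=0;x<outs;x++)
    drawMinMaxColumn(x,out0[x],out1[x]);  // e.g. a vertical line from min to max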
This is how it looks (when I change the resolution manually with the mouse wheel):
The choppiness is caused by the GIF grabber... the scaling is fast and seamless in reality.
I had a similar problem, with 1,800,000 points of a waveform to draw on an 800 point screen. The zoom factor was 2000. If anyone is interested, here is how I got good results:
Divide the very long list into 400 smaller lists.
For each smaller list, calculate the biggest difference between the smallest and largest value in that list.
Plot 2 points per list, one at (offset + delta / 2) and one at (offset - delta / 2).
Results:
from 453,932 points to 800 points
Python code:
import matplotlib.pyplot as plt

numberOfSmallerList = 400
small_list_len = int(len(big_list) / numberOfSmallerList)
finalPointsToPlot = []
for i in range(0, len(big_list), small_list_len):
    biggestDiff = (max(big_list[i:i+small_list_len]) -
                   min(big_list[i:i+small_list_len]))
    finalPointsToPlot.append(biggestDiff/2 + 100)
    finalPointsToPlot.append(100 - biggestDiff/2)

plt.plot(finalPointsToPlot)
plt.show()

How to stop blue lines from appearing on a Windows console during an ASCII game?

I've run into a very unusual problem. So unusual, in fact, that I haven't found even a mention of anything like it on Google. Here's the deal: I was working on making an ASCII-based computer game for a programming class. I had created a simple program that moved a one-character rectangle around the screen when you pressed the four arrow keys. But I quickly noticed that when the rectangle moved right, it left a trail of vertical blue lines in its wake. I have absolutely no idea why this is happening, and as mentioned, Google doesn't seem to have the answers. So I'm wondering if there is any way to fix this problem. If you want some technical information, I am running this on Windows 10 with Dev-C++.
#include <ctime>
#include <iostream>
#include <windows.h>

void writeToConsole(char chr, COORD pos) {
    static const HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    std::cout.flush();                    // makes sure we're not changing the cursor during a cout
    SetConsoleCursorPosition(hOut, pos);  // sets where the next characters are printed
    std::cout << chr;
}

int main(){
    COORD squareXY = {0, 0};
    const int width = 30;
    for (int i = 0; i < width; i++)       // prints spaces that will be overwritten
        std::cout << ' ';
    writeToConsole((char)219, squareXY);  // (char)219 is a solid rectangle
    int lastTimeMove = clock();           // time measured in clocks, not seconds
    while (true) {
        if ((clock() - lastTimeMove) > .1 * CLOCKS_PER_SEC) {
            // trigger approximately every tenth of a second
            writeToConsole(' ', squareXY);            // erase previous
            squareXY.X = (squareXY.X + 1) % width;
            writeToConsole((char)219, squareXY);
            lastTimeMove = clock();
        }
    }
}
UPDATE:
I found someone having a similar problem with the Windows WriteConsoleOutput function. I changed my code to use that function, and now I have red and blue lines. Here are some screenshots:
[Screenshot: blue lines]
[Screenshot: blue and red lines]
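For reference, WriteConsoleOutput fills whole character cells, and each CHAR_INFO in the buffer carries both a character and an Attributes color field; attributes left uninitialized are a classic source of stray colored cells. A minimal sketch of writing one row with every attribute set explicitly (hypothetical, not a confirmed fix for the screenshots above):

#include <windows.h>

// write one row of text with explicit colors (white on black); assumes len <= 80
void writeRow(HANDLE hOut, const char *text, SHORT row, SHORT len) {
    CHAR_INFO buf[80];
    for (SHORT i = 0; i < len; i++) {
        buf[i].Char.AsciiChar = text[i];
        // setting Attributes on every cell avoids stray colors
        buf[i].Attributes = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE;
    }
    COORD bufSize  = {len, 1};
    COORD bufCoord = {0, 0};
    SMALL_RECT region = {0, row, (SHORT)(len - 1), row};
    WriteConsoleOutputA(hOut, buf, bufSize, bufCoord, &region);
}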

Drawing image(PGraphics) gives unwanted double image mirrored about x-axis. Processing 3

The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window translated, rotated, and scaled. I am trying to create an effect like the feedback loop you get when you point a camera plugged into a TV at the TV itself.
I have tried everything I can think of and logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Picture of the double image mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;

void setup(){
  size(1000, 1000, P2D);
  f = createGraphics(width, height, P2D);
  midX = width/2;
  midY = height/2;
  rect_size = 300;
  imageMode(CENTER);
  rectMode(CENTER);
  smooth();
  background(0,0,0);
  fill(0,0);
  stroke(255,255);
}

void draw(){
  fade_and_copy_pixels(f); // fades window pixels and then copies pixels to f
  background(0,0,0);       // without this the corners don't get repainted

  // transform the display window (instead of f)
  pushMatrix();
  float scaling = 0.90;    // x > 1 makes the image bigger
  float rot = 5;           // angle in degrees
  translate(midX,midY);    // makes it so rotations are always around the center
  rotate(radians(rot));
  scale(scaling);
  imageMode(CENTER);
  image(f,0,0);            // weird double image; something must not be working around here
  popMatrix();             // returns the window matrix to normal

  int x = mouseX;
  int y = mouseY;
  rectMode(CENTER);
  rect(x,y,rect_size,rect_size);
}

// fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f){
  loadPixels();   // load the window's pixels; do I even need this since I am only reading them?
  f.loadPixels(); // loads the feedback loop's pixels
  // Loop through every pixel in the window.
  // It is faster to grab data from the pixels[] array, so don't use get and set; use this.
  for (int i = 0; i < pixels.length; i++) {
    // fade pixels in the window and copy to f:
    color p = pixels[i];
    // get the color values: mask, then shift
    int r = (p & 0x00FF0000) >> 16;
    int g = (p & 0x0000FF00) >> 8;
    int b = p & 0x000000FF; // no need for shifting
    // reduce each color value proportionally; fade_percent is the fraction removed per frame.
    // The minimum that makes a difference is about 0.0039, slightly more than 1/255
    // (when using the floor function and 255 as the colorMode for colors).
    float fade_percent = 0.005; // 0.05 = 5%
    int r_new = floor(float(r) - (float(r) * fade_percent));
    int g_new = floor(float(g) - (float(g) * fade_percent));
    int b_new = floor(float(b) - (float(b) * fade_percent));
    // Maybe later rewrite this to track the rounding difference, or fade faster at first and slower later.
    // round doesn't work because it never subtracts the first 1 to get the ball rolling;
    // floor always subtracts at least 1 from each value each time, so you can't just subtract 1 every n loops.
    // Keeping a list of all the pixels as floats might use too much memory, so I'll stick with floor for now.
    // shift back and OR together into AARRGGBB
    p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new;
    f.pixels[i] = p;
    // pixels now copied
  }
  f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;

void setup() {
  size(500, 500, P2D);
  f = createGraphics(width, height, P2D);
}

void draw() {
  background(0);
  rect(mouseX, mouseY, 100, 100);
  copyPixels(f);
  image(f, 0, 0);
}

void copyPixels(PGraphics f) {
  loadPixels();
  f.loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color p = pixels[i];
    f.pixels[i] = p;
  }
  f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, they go from screen space to OpenGL space... or something. That definitely seems buggy, though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix, and it works like a charm:
Add f.beginDraw(); before and f.endDraw(); after using f:

loadPixels();   // load the window's pixels
f.loadPixels(); // load the feedback loop's pixels
f.beginDraw();

and

f.updatePixels();
f.endDraw();

Processing must know when it is drawing into a buffer and when it is not.
In this image you can see that it works:

Finding the skin tone percentage of a person from an image (not a webcam feed), ignoring the background

I need to find the percentage of skin tone of a person in a given image.
So far I have been able to count all the pixels with skin colour, but I am having trouble ignoring the background of the person so I can count only the person's pixels for the percentage.
BackgroundSubtractorMOG2 bg;
bg.nmixtures = 3;
bg.bShadowDetection = false;
bg.operator()(img, fore);
bg.getBackgroundImage(back);
img is my image. I was trying to separate the back and fore Mat objects, but with the above code snippet back and fore take the same value as img; nothing is happening.
Can you point me in the right direction as to what changes I have to make to get it right?
I was able to run some similar code found here:
http://mateuszstankiewicz.eu/?p=189
I had to change a couple of things, but it ended up working properly (back and fore are not the same as img when displayed):
int main(int argc, char *argv[]) {
    Mat frame, back, fore;
    VideoCapture cap(0);
    BackgroundSubtractorMOG2 bg;
    vector<std::vector<Point> > contours;
    namedWindow("Frame");
    namedWindow("Background");
    namedWindow("Foreground");
    for(;;) {
        cap >> frame;
        bg.operator ()(frame, fore);
        bg.getBackgroundImage(back);
        erode(fore, fore, Mat());
        dilate(fore, fore, Mat());
        findContours(fore, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
        drawContours(frame, contours, -1, Scalar(0, 0, 255), 2);
        imshow("Frame", frame);
        imshow("Background", back);
        imshow("Foreground", fore);
        if(waitKey(1) == 27) break;
    }
    return 0;
}
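Once the foreground mask works, one way to get the actual percentage (a rough sketch, untested; the YCrCb thresholds are common rule-of-thumb values and would need tuning for your data) is to build a skin mask, intersect it with the foreground mask, and compare the pixel counts:

// img: BGR input image, fore: 8-bit single-channel foreground mask
double skinPercentage(const Mat &img, const Mat &fore) {
    Mat ycrcb, skinMask, skinInForeground;
    cvtColor(img, ycrcb, CV_BGR2YCrCb);
    // approximate skin-tone range in YCrCb
    inRange(ycrcb, Scalar(0, 133, 77), Scalar(255, 173, 127), skinMask);
    bitwise_and(skinMask, fore, skinInForeground); // skin pixels on the person only
    int person = countNonZero(fore);
    return person ? 100.0 * countNonZero(skinInForeground) / person : 0.0;
}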

OpenGL GLUT window very slow, why?

The problem
I have just now begun working with OpenGL using GLUT. The code below compiles and displays two wireframe cubes and a sphere. The problem is that when I attempt to drag or resize the window, there is a noticeable delay before it follows my mouse.
This problem does not occur on my colleague's computer with the same code.
I am working with Visual Studio 2012 C++ Express on a Windows 7 computer.
I am not an experienced programmer.
The code
// OpenGLHandin1.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <GL/glut.h>

void initView(int argc, char * argv[]){
    // init here
    glutInit(&argc, argv);
    // simple (single) buffer
    glutInitDisplayMode( GLUT_SINGLE | GLUT_RGBA );
    glutInitWindowPosition(100,100);
    glutInitWindowSize(800,400);
    glutCreateWindow("Handin 2");
}

void draw(){
    glClearColor(0,0,0,1);  // background color
    glClear(GL_COLOR_BUFFER_BIT);

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0.6, 0, 0);
    glColor3f(0.8,0,0);
    glutWireCube(1.1);      // draw the cube
    glPopMatrix();

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(-0.5, 0, -0.2);
    glColor3f(0,0.8,0);
    glutWireCube(1.1);      // draw the cube
    glPopMatrix();

    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0, 1.2, 0);
    glRotatef(90, 1, 0, 0);
    glColor3f(1,1,1);
    glutWireSphere(0.6, 20, 20); // draw the sphere
    glPopMatrix();

    // draw here
    //glutSwapBuffers();
    glutPostRedisplay();
    glFlush();
}

void reshape (int w, int h){
    glViewport(0,0,w ,h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, (float)w/(float)h, 1.5, 10);
    gluLookAt(1.5, 2.5, 4,
              0,   0.6, 0,
              0,   1,   0); // orient the camera
    glRotatef(5, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char * argv[])
{
    initView(argc,argv);
    glutDisplayFunc(draw);
    glutReshapeFunc(reshape);
    glutMainLoop();
}
Solution:
It seems that the simple solution of using Sleep(1) in the render function worked. You've also asked why. I'm not sure I can explain this properly, but here's my best guess:
Why does it even work?
Your fellow students may have VSync turned on by default in their drivers. This causes their code to run only as fast as the screen can refresh, most probably 60 fps. That gives you around 16 milliseconds to render a frame, and if the code is efficient (taking, say, 2 ms to render) it leaves plenty of time for the CPU to do other OS-related work, such as moving your window.
Now, if you disable vertical sync, the program will try to render as many frames as possible, effectively clogging all other processes. I suggested using Sleep because it reveals this one particular issue. It doesn't really matter if it's 1 or 3 ms; what it really does is say "hey, CPU, I'm not doing anything in particular right now, so you may do other things".
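Concretely, the suggested change amounts to something like this sketch (Sleep comes from <windows.h>, which would need to be included):

void draw(){
    // ... all the existing rendering calls ...
    glutPostRedisplay();
    glFlush();
    Sleep(1); // hand the rest of the time slice back to the OS
}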
But isn't it slowing my program down?
Using Sleep is a common technique. If you're concerned about the 1 ms lost every frame, you can also try Sleep(0), which should act exactly the same, giving the spare time to the CPU. You could also try enabling vertical sync and verifying whether my guess was correct.
As a side note, you can also look at CPU usage graphs with and without the sleep. It should be 100% (or 50% on a dual-core CPU) without it (running as fast as possible), and much lower with it, depending on your program's requirements and your CPU's speed.
Additional remarks about Sleep(0)
After the sleep interval has passed, the thread is ready to run. If you specify 0 milliseconds, the thread will relinquish the remainder of its time slice but remain ready. Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses. (From the MSDN documentation for Sleep.)
Also note that on Linux systems the behavior might be slightly different, but I'm not a Linux expert; perhaps a passer-by could clarify.
