I'm creating a network animator (similar to nam, if you have used it before).
Basically, I have nodes represented as small dots on a GTK+ DrawingArea, and I update the positions of these nodes and redraw the DrawingArea in a loop.
The resulting animation is fast, but not smooth (there's a lot of flicker). This is probably because I fill the DrawingArea with a solid color before each frame.
How do you think I can best tackle this problem? Should I pre-render the frames onto Pixbufs? Is there a better solution?
Here's my current drawing code (using PyGTK):
rect = self.drawing_area.get_allocation()
style = self.drawing_area.get_style()
pos = [n.position_at(self.t) for n in self.nodes]

# clear the whole area, then draw each node as a filled circle
self.drawing_area.window.draw_rectangle(style.bg_gc[gtk.STATE_NORMAL], True,
                                        0, 0, rect.width, rect.height)
for p in pos:
    self.drawing_area.window.draw_arc(style.fg_gc[gtk.STATE_NORMAL], True,
                                      int(rect.width * (p.x / 2400.0)) - NODE_SIZE / 2,
                                      int(rect.height * (p.y / 2400.0)) - NODE_SIZE / 2,
                                      NODE_SIZE, NODE_SIZE,
                                      0, 64 * 360)
where self.t is the current time, which is incremented in the loop.
I changed my code to render the frames onto a Pixmap, and replaced the DrawingArea with an Image.
While this solved the flickering, CPU usage has now spiked. The animation is still quite fast, but I don't think this method is scalable.
Time for some optimization, I guess.
UPDATE: It turns out using expose-event with an Image wasn't such a good idea. CPU usage is back to normal.
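For the record, the usual flicker-free pattern in PyGTK is to keep the DrawingArea, do all drawing from its expose-event handler (GTK double-buffers expose drawing automatically), and drive frames with a timeout. A minimal sketch, where the DT step and the 40 ms interval are illustrative:

import gobject

def on_expose(self, widget, event):
    # the draw_rectangle/draw_arc code from above goes here;
    # GTK double-buffers drawing done in the expose handler, so no flicker
    return False

def on_timer(self):
    self.t += DT                    # advance the animation clock
    self.drawing_area.queue_draw()  # schedule a double-buffered redraw
    return True                     # keep the timeout running

# wiring, e.g. in __init__:
#   self.drawing_area.connect("expose-event", self.on_expose)
#   gobject.timeout_add(40, self.on_timer)   # ~25 fps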
About the expose-event handling, check out the first paragraph of Animations with Cairo + Gtk :P
Multi-threaded Animation with Cairo and GTK+
Complex animations with cairo and GTK+ can result in a laggy interface. This is because the gtk_main() thread runs in a single loop. So, if your do_draw() function implements a complicated drawing command, and it is called from the gtk_main() thread (say by an on_window_expose_event() function), the rest of your gtk code will be blocked until the do_draw() function finishes. Consequently, menu items, mouse clicks, and even close button events will be slow to be processed and your interface will feel laggy.
One solution is to hand off all the processor-intensive drawing to a separate thread, thus freeing the gtk_main() thread to respond to events.
http://cairographics.org/threaded_animation_with_cairo/
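A rough PyGTK translation of that idea (render_frame is a hypothetical stand-in for the expensive drawing; the worker renders each frame to an offscreen cairo surface and the expose handler just blits the finished frame):

import threading
import cairo
import gobject

def worker(self):
    while self.running:
        surf = cairo.ImageSurface(cairo.FORMAT_ARGB32, self.w, self.h)
        self.render_frame(cairo.Context(surf))  # expensive work, off the main loop
        self.frame = surf                       # hand the finished frame over
        gobject.idle_add(self.drawing_area.queue_draw)

def on_expose(self, widget, event):
    ctx = widget.window.cairo_create()
    if self.frame is not None:
        ctx.set_source_surface(self.frame, 0, 0)
        ctx.paint()                             # cheap: just blit the frame
    return False

# started with: threading.Thread(target=self.worker).start()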
Related
I am drawing an animation using double-buffered GDI on a window, on a system where DWM composition is enabled, and seeing clearly visible tearing onscreen. Is there a way to prevent this?
Details
The animation takes the same image and moves it right to left over the screen. The number of pixels to shift is derived from the fraction complete, computed from the current time, the animation's start time, and its end time (timed with timeGetTime at 1 ms resolution), and applied to the whole window width. The animation draws in a loop without processing application messages; it calls the (VCL library) method Repaint, which internally invalidates the window and then calls UpdateWindow, calling directly into the message procedure with WM_PAINT. The VCL implementation of the paint handler uses BeginBufferedPaint. Painting is itself double-buffered.
The aim of this is to have as high a frame rate as possible to get a smooth animation across the screen. (The drawing uses double-buffering to remove flickering and to ensure a whole image or frame is onscreen at any one time. It invalidates and updates directly by calling into the message procedure, without doing other message processing. Painting is implemented using modern techniques (e.g. BeginBufferedPaint) for Aero composition.) Within this, painting is done in a couple of BitBlt calls (one for the left side of the animation, i.e. what's moving offscreen, and one for the right side of the animation, i.e. what's moving onscreen).
When watching the animation, there is clearly visible tearing. This occurs on Windows Vista, 7 and 8.1 on multiple systems with different graphics cards.
My approach to handle this has been to reduce the rate at which it is drawing, or to try to wait for VSync before painting again. This might be the wrong approach, so the answer to this question might be "Do something else completely: X". If so, great :)
(What I'd really like is a way to ask the DWM to compose / use only fully-painted frames for this specific window.)
I've tried the following approaches, none of which remove all visible tearing. Therefore the question is, Is it possible to avoid tearing when using DWM composition, and if so how?
Approaches tried:
Getting the monitor refresh rate via GetDeviceCaps(Application.MainForm.Handle, VREFRESH) and sleeping for 1000 / refresh-rate milliseconds (see the sketch after this list). Slightly improved over painting as fast as possible, but that may be wishful thinking. Perceptually slightly less smooth animation rate. (Tweaks: normal Sleep and a high-resolution spin-wait using timeGetTime.)
Using DwmSetPresentParameters to try to limit updating to the same rate at which the code draws. (Variations: lots of buffers (cBuffer = 8): no visible effect; specifying a source rate of monitor refresh rate / 1 and sleeping using the above code: the same as just trying the sleeping approach; specifying a refresh per frame of 1, 10, etc.: no visible effect; changing the source frame coverage: no visible effect.)
Using DwmGetCompositionTimingInfo in a variety of ways:
While cFramesPending > 0, spin;
Get cFrame (frame composed) and spin while this number doesn't change;
Get cFrameDisplayed and spin while this doesn't change;
Calculating a time to sleep until by adding qpcVBlank + qpcRefreshPeriod, and then spinning while QueryPerformanceCounter returns a time less than this (sketched below).
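For concreteness, here is a rough C sketch of the sleep variant and of that last spin variant (hwnd is the animation window; error handling is elided, and passing NULL to DwmGetCompositionTimingInfo is an assumption):

// sleep variant: wait roughly one refresh interval between frames
HDC dc = GetDC(hwnd);
int hz = GetDeviceCaps(dc, VREFRESH);   // refresh rate in Hz; 0 or 1 mean "default"
ReleaseDC(hwnd, dc);
Sleep(1000 / (hz > 1 ? hz : 60));

// spin variant: wait until one refresh period past the last reported vblank
DWM_TIMING_INFO ti = { 0 };
ti.cbSize = sizeof(ti);
if (SUCCEEDED(DwmGetCompositionTimingInfo(NULL, &ti)))
{
    ULONGLONG target = ti.qpcVBlank + ti.qpcRefreshPeriod;
    LARGE_INTEGER now;
    do {
        QueryPerformanceCounter(&now);
    } while ((ULONGLONG) now.QuadPart < target);
}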
All these approaches have also been varied by painting, then spinning/sleeping before painting again; or the reverse: sleeping and then painting.
Few seem to have any visible effect, and what effect there is is hard to quantify and may just be the result of a lower frame rate. None prevent tearing, i.e. none make the DWM compose the window with a "whole" copy of the contents of the window's DC.
Advice appreciated :)
Since you're using BitBlt, make sure your DIBs are 4 bytes / pixel. With 3 bytes / pixel, GDI is horribly slow while DWM is running; that could be the source of your tearing. Another BitBlt issue I've run into: if your DIB is somewhat large, the BitBlt call may take an unexpectedly long time. Splitting that one call into smaller calls that each draw only a portion of the data might help. Both of these items helped me in my case, only because BitBlt itself was running too slowly, thus leading to video artifacts.
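If the full-frame BitBlt is the slow part, a rough sketch of that splitting idea (the 64 px band height is just a starting point to tune):

void BlitInBands(HDC dst, HDC src, int width, int height)
{
    const int band = 64;                 // tuneable band height
    for (int y = 0; y < height; y += band)
    {
        int h = (y + band > height) ? (height - y) : band;
        BitBlt(dst, 0, y, width, h, src, 0, y, SRCCOPY);
    }
}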
I'm making a 2D sidescrolling space shooter-type game, where I need a background that can be scrolled infinitely (it is tiled or wrapped repeatedly). I'd also like to implement parallax scrolling: perhaps a bottom nebula layer that barely moves, a middle layer of far-away stars that moves slightly more, and a top layer of close stars that moves a lot.
I see from Google that I'd have each layer move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and the physical 800x480 screen could show anything from 128x128 pixels (a ship) to a huge area of space featuring the textures wrapped multiple times at their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out), and then how do I layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?) and move them as the player's coords change? I've looked at the javadocs and the examples but can't find anything like this problem; apologies if it's obvious!
Hey, I am also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository; it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input, since it's hooked up to use the touch screen/mouse.
This worked for me. As for the repeated background, I haven't gotten that far yet, but I think you just need some basic logic: when you're one screen away from the end, change the first few screens' positions to line up at the end.
I don't have much more to say about the parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder, and several explanations around the web. I liked this one.
The matter of the background is really easy to solve. This and other related problems can be approached using modular arithmetic. I won't go into the details because, once shown, it is very easy to understand.
Imagine that you want to show a compass on your screen. You have a 1024x16 texture representing the cardinal points. Basically, all you have is a strip. Leaving aside considerations about the real orientation and such, you have to render it.
Your viewport is 300x400, for example, and you want 200px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach the position (1024 - 200) = 824. Once you're past this position, clearly there is no more texture. But since it is a compass, it obviously has to start over once you reach its end. So this is the answer: another texture region will do the trick. The range 825-1023 has to be represented by a second region, whose width is (1024 - pos) for every value pos > 824 && pos < 1024.
This code is intended to work as a real example of a compass. It's very dirty, since it works with relative positions all the time due to the conversion between the ranges (0-3.6) and (0-1024).
spriteBatch.begin();

// wrap the orientation into the range [0, 3.6)
if (compassorientation < 0)
    compassorientation = (float) (3.6 - compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);

if (compassorientation < ((float) (1024 - 200) / 1024 * 3.6)) {
    // the 200px window fits inside the texture: a single region suffices
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 200, 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth(), 32 * (float) 1.2);
}
else if (compassorientation > ((float) (1024 - 200) / 1024 * 3.6)) {
    // the window runs off the end of the strip: draw the tail ...
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 1024 - (int) (compassorientation / 3.6 * 1024), 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), 32 * (float) 1.2);

    // ... and wrap around with a second region from the start of the strip
    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2, compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth() - (compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth()), 32 * (float) 1.2);
}

spriteBatch.end();
You can use the setWrap function like below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! Hope this helps!
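Building on that, a rough sketch of the parallax part: draw the layers back to front and let each nearer layer scroll twice as fast. The layers array (ordered back to front), cameraX, and the 0.25f base factor are made-up names/values:

float factor = 0.25f;                            // scroll rate of the deepest layer
for (Texture layer : layers) {                   // Texture[] ordered back to front
    layer.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
    batch.draw(layer, 0, 0,                      // fill the whole screen
               (int) (cameraX * factor), 0,      // srcX offset wraps thanks to Repeat
               Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    factor *= 2f;                                // each nearer layer moves twice as fast
}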
Beneath where you initialize your Texture for the object, type in this:
YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
where YourTexture is the texture that you want to parallax scroll.
In your render method, type in this code:
batch.draw(YourTexture, 0, 0, 0, srcy, Gdx.graphics.getWidth(),
           Gdx.graphics.getHeight());
srcy += 10;
It is going to give you an error, so declare a field called srcy. It is nothing too fancy:
int srcy = 0;
I used the code from How to make an OpenGL rendering context with transparent background? to create a window with a transparent background. My problem is that the frame rate is very low: I get around 20 frames/sec even when I draw one quad (made from 2 triangles). I tried to find out why, and glFlush() takes around 0.047 seconds. Do you have any idea why? The same thing is rendered in a window that does not have a transparent background at 6000 fps (when I remove the 60 fps limitation). It also takes one core to 100%. I'm testing on a Q9450 @ 2.66GHz with an ATI Radeon 4800, using Win7.
I think you can't get good performance this way. In the example linked there is the following code:
void draw(HDC pdcDest)
{
    assert(pdcDIB);
    verify(BitBlt(pdcDest, 0, 0, w, h, pdcDIB, 0, 0, SRCCOPY));
}
BitBlt is a function executed on the CPU, whereas the OpenGL functions are executed by the GPU. So the rendered data has to crawl back from the GPU to main memory, and the bandwidth from the GPU to the CPU is somewhat limited (even more so because the data has to go back again once BitBlt'ed).
If you really want a transparent window with rendered content, you might want to look at Direct2D and/or Direct3D; maybe there is some way to do that without the performance penalty of moving the data around.
I have my app running on my iPad, but it is performing very badly: I am getting below 15 fps. Can anyone help me optimise it?
It is basically a wheel (derived from UIView) containing 12 buttons (derived from UIControl).
As the user spins it, the buttons dynamically expand and contract (e.g. the one at the 12 o'clock position should always be the biggest)
So my wheel contains a display-link callback:
- (void) displayLinkIsCallingBack: (CADisplayLink *) dispLink
{
    :
    // using CATransaction like this goes from 14fps to 19fps
    [CATransaction begin];
    [CATransaction setDisableActions: YES];

    // NEG, as coord system is flipped/fucked
    self.transform = CGAffineTransformMakeRotation(-thetaWheel);
    [CATransaction commit];

    if (BLA)
        [self rotateNotch: direction];
}
… which calculates from recent touch input the new rotation for the wheel. There is already one performance issue here which I am pursuing on a separate thread: iOS Core-Animation: Performance issues with CATransaction / Interpolating transform matrices
This routine also checks whether the wheel has completed another 1/12 rotation, and if so instructs all 12 buttons to resize:
// Wheel.m
- (void) rotateNotch: (int) direction
{
    for (int i = 0; i < [self buttonCount]; i++)
    {
        CustomButton * b = (CustomButton *) [self.buttons objectAtIndex: i];

        // Note that b.btnSize is a dynamic property which will calculate
        // the desired button size based on the button index and the wheel's rotation.
        [b resize: b.btnSize];
    }
}
Now for the actual resizing code, in button.m:
// Button.m
- (void) scaleFrom: (float) s_old
                to: (float) s_new
              time: (float) t
{
    CABasicAnimation * scaleAnimation = [CABasicAnimation animationWithKeyPath: @"transform.scale"];
    [scaleAnimation setDuration: t];
    [scaleAnimation setFromValue: (id) [NSNumber numberWithDouble: s_old]];
    [scaleAnimation setToValue: (id) [NSNumber numberWithDouble: s_new]];
    [scaleAnimation setTimingFunction: [CAMediaTimingFunction functionWithName: kCAMediaTimingFunctionEaseOut]];
    [scaleAnimation setFillMode: kCAFillModeForwards];
    scaleAnimation.removedOnCompletion = NO;

    [self.contentsLayer addAnimation: scaleAnimation
                              forKey: @"transform.scale"];
    if (self.displayShadow && self.shadowLayer)
        [self.shadowLayer addAnimation: scaleAnimation
                                forKey: @"transform.scale"];
    size = s_new;
}

// - - -

- (void) resize: (float) newSize
{
    [self scaleFrom: size
                 to: newSize
               time: 1.];
}
I wonder if the problem is related to the overhead of multiple transform.scale operations queueing up -- each button resize takes a full second to complete, and if I am spinning the wheel fast I might spin a couple of revolutions per second; that means that each button is getting resized 24 times per second.
** creating the button's layer **
The final piece of the puzzle, I guess, is to have a look at the button's contentsLayer. I have tried
contentsLayer.shouldRasterize = YES;
which should effectively store it as a bitmap. So with this setting, the code is in effect dynamically resizing 12 bitmaps.
I can't believe this is taxing the device beyond its limits. However, the Core Animation instrument tells me otherwise: while I am rotating the wheel (by dragging my finger in circles), it reports ~15 fps.
This is no good: I eventually need to put a text layer inside each button, and this is going to drag performance down further (...unless I use the shouldRasterize setting above, in which case it should be the same).
There must be something I'm doing wrong! But what?
EDIT: Here is the code responsible for generating the button's content layer (i.e. the shape with the shadow):
- (CALayer *) makeContentsLayer
{
    CAShapeLayer * shapeOutline = [CAShapeLayer layer];
    shapeOutline.path = self.pOutline;

    CALayer * contents = [CALayer layer];

    // get the smallest rectangle centred on (0,0) that completely contains the button
    CGRect R = CGRectIntegral(CGPathGetPathBoundingBox(self.pOutline));
    float xMax = MAX(fabs(R.origin.x), fabs(R.origin.x + R.size.width));
    float yMax = MAX(fabs(R.origin.y), fabs(R.origin.y + R.size.height));
    CGRect S = CGRectMake(-xMax, -yMax, 2 * xMax, 2 * yMax);
    contents.bounds = S;

    contents.shouldRasterize = YES; // try NO also

    switch (technique)
    {
        case kMethodMask:
            // clip contents layer by outline (outline centered on (0, 0))
            contents.backgroundColor = self.clr;
            contents.mask = shapeOutline;
            break;

        case kMethodComposite:
            shapeOutline.fillColor = self.clr;
            [contents addSublayer: shapeOutline];
            self.shapeLayer = shapeOutline;
            break;

        default:
            break;
    }

    if (NO)
        [self markPosition: CGPointZero
                   onLayer: contents];

    //[self refreshTextLayer];
    //[contents addSublayer: self.shapeLayerForText];

    return contents;
}
As you can see, I'm trying every possible approach: I am trying two methods for making the shape, and separately I am toggling .shouldRasterize.
** compromising the UI design to get tolerable frame rate **
EDIT: Now I have tried disabling the dynamic resizing behaviour until the wheel settles into a new position, and setting shouldRasterize = YES on the wheel's layer. So it is effectively spinning a single prerendered UIView (which is admittedly taking up most of the screen) underneath the finger (which it happily does at ~60 fps), until the wheel comes to rest, at which point it performs this laggy resizing animation (at <20 fps).
While this gives a tolerable result, it seems nuts that I am having to sacrifice my UI design in such a way. I feel sure I must be doing something wrong.
EDIT: I have just tried, as an experiment, resizing the buttons manually: put a display-link callback in each button, dynamically calculate the expected size of the button for the given frame, explicitly disable animations with CATransaction the same as I did with the wheel, and set the new transformation matrix (a scale transform generated from the expected size). On top of this I have set each button's content layer to shouldRasterize = YES, so it should simply be scaling 12 bitmaps each frame onto a single UIView which is itself rotating. Amazingly, this is dead slow; it even brings the simulator to a halt. It is definitely 10 times slower than doing it automatically using Core Animation's animation feature.
I have no experience in developing iPad applications, but I do have some in optimizing video games. So I cannot give an exact answer, but I want to give some optimization tips.
Do not guess. Profile it.
It seems you are trying to make changes without profiling the code. Changing some suspicious code and crossing your fingers does not really work. You should profile your code by examining how long each task takes and how often it needs to run. Try to break down your tasks and add profiling code to measure time and frequency. It's even better if you can measure how much memory each task uses, or how much of other system resources. Find your bottleneck based on evidence, not on your feelings.
For your specific problem, you think the program gets really slow when your resizing work kicks in. But, are you sure? It could be something else. We don't know for sure until we have actual profiling data.
Minimize the problematic area and measure real performance before making changes.
After profiling, you have a candidate for your bottleneck. If you can still split the cause into several small tasks, do so, and profile them until you cannot split them any further. Then try to measure their precise performance by running them repeatedly, say a few thousand times. This is important because you need to know the performance (speed & space) before making any changes, so that you can compare it to future changes.
For your specific problem, if resizing is really the issue, try to examine how it performs. How long does one resize call take? How often does resize work have to run to complete the job?
Improve it. How? It depends.
Now you have the problematic code block and its current performance, and you have to improve it. How? Well, it really depends on what the problem is. You could search for better algorithms, you could fetch lazily if you can delay calculations until you really need their results, or you could evaluate over-eagerly by caching some data if you perform the same operation too frequently. The better you know the cause, the easier you can improve it.
For your specific problem, it might just be a limitation of the OS UI functions. Usually, resizing a button is not just resizing the button itself: it also invalidates a whole area or the parent widget so that all the UI can be rendered properly. Resizing buttons can therefore be expensive, and if that's the case, you could resolve the issue by simply using an image-based approach instead of the OS UI system. You could try OpenGL if image operations from the OS API are not enough. But, again, we don't know until we actually try these alternatives out and profile them. Good luck! :)
Try it without your shadows and see if that improves performance. I imagine it will improve it greatly. Then I'd look into using CALayer's shadowPath for rendering shadows; that will greatly improve shadow rendering performance.
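For example, something along these lines (button.contentsLayer stands in for whichever layer carries the shadow; the bounds rectangle is only a placeholder for the button's real outline path):

CALayer * layer = button.contentsLayer;   // hypothetical: the shadowed layer
layer.shadowPath = [UIBezierPath bezierPathWithRect: layer.bounds].CGPath;
// CA now reuses this path instead of deriving the shadow shape from the
// layer contents on every frame; for the wheel buttons, the existing
// outline CGPath (self.pOutline) could be handed to shadowPath directly.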
Apple's Core Animation videos from last year's WWDC have a lot of great info on increasing performance in core animation.
By the way, I'm animating something way more complex than this right now and it works beautifully even on an older iPhone 3G. The hardware/software is quite capable.
I know this question is old, but still up to date. Core Animation (CA) is just a wrapper around OpenGL, or by now maybe around Metal. Layers are in fact textures drawn on rectangles, and the animations are expressed using 3D transformations. As all of this is handled by the GPU, it should be ultra fast... but it isn't. The whole CA sub-system seems pretty complex, and translating between AppKit/UIKit and the 3D world is harder than it seems (if you ever tried to write such a wrapper yourself, you know how hard it can be).
To the programmer, CA offers a super simple interface, but this simplicity comes at a price. All my attempts to optimize very slow CA code have been futile so far; you can speed it up a bit, but at some point you have to reconsider your approach. Either CA is fast enough to do the job for you, or you need to stop using CA and either implement all animation yourself using classic view drawing (if the CPU can cope with that) or implement the animations yourself using a 3D API (then the GPU will do it), in which case you can decide how the 3D world interacts with the rest of your app. The price is much more code to write or a much more complex API to use, but the results will speak for themselves in the end.
Still, I'd like to give some generic tips about speeding up CA:
Every time you "draw" to a layer or load content into a layer (a new image), the texture backing this layer needs to be updated. Every 3D programmer knows: updating textures is very expensive. Avoid it at all costs.
Don't use huge layers: if layers are too big to be handled directly by the GPU as a single texture, they are split into multiple textures, and this alone makes performance worse.
Don't use too many layers, as the amount of memory GPUs can spend on textures is often limited. If you need more memory than that limit, textures are swapped out (removed from GPU memory to make room for other textures, and added back later when they need to be drawn). See the first tip above; this kind of swapping is very expensive.
Don't redraw things that don't need redrawing; cache them into images instead. E.g. drawing shadows and drawing gradients are both ultra expensive and usually rarely change. So instead of making CA draw them each time, draw them once to a layer, or draw them to an image and load that image into a CALayer, then position the layer where you need it. Monitor when you do need to update them (e.g. when the size of an object has changed), then re-draw them once and cache the result again. CA itself also tries to cache results, but you get better results if you control that caching yourself.
Be careful with transparency. Drawing an opaque layer is always faster than drawing one that isn't, so avoid transparency where it is not needed, as the system will not scan all your content for transparency (or the lack of it). If an NSView contains no area where its parent shines through, make a custom subclass and override isOpaque to return YES. The same holds true for UIViews and layers where neither the parent nor any siblings will ever shine through, but there it is enough to just set the opaque property to YES (sketched below).
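A minimal sketch of both cases (SolidView is a made-up name):

// UIKit: a view/layer that fully covers its backdrop
view.opaque = YES;
view.layer.opaque = YES;

// AppKit: a custom view that promises full coverage
@interface SolidView : NSView
@end

@implementation SolidView
- (BOOL) isOpaque { return YES; }   // nothing behind this view ever shines through
@end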
If none of that really helps, you are pushing CA to its limits and you probably need to replace it with something else.
You should probably just redo this in OpenGL.
Are you using shadows on your layers? For me they were a cause of performance issues. Then you have 2 options AFAIK:
- setting shadowPath; that way, CA does not have to compute it every time
- removing the shadows and using images to replace them
I want to write a paint program in the style of MS Paint.
For painting things on screen when the user moves the mouse, I have to wait for mouse move events and draw on the screen whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
    positionOld = positionNew
Now my question: interpolating with straight line segments looks too jagged for my taste. Can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop implement?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive?
GUI framework: wxWidgets.
(Programming language: Haskell, but that's irrelevant here)
EDIT: Clarification: I want something that looks smoother than straight line segments; see the picture (original size):
EDIT2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1) -- wxBitmap
dc <- memoryDCCreate -- wxMemoryDC
memoryDCSelectObject dc bitmap
...
-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
    ...
    line dc posOld posNew [ color    := white
                          , penJoin  := JoinRound
                          , penWidth := 2 ]
    repaint sw -- a wxScrolledWindow
-- handle paint event
onPaint ... = do
    ...
    -- draw bitmap on the wxScrolledWindow
    drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is the reason I'm getting a rather low frequency of mouse move events.
Live demos
version 1 - smoother, but changes more while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is:
Spline interpolation of the points
The solution is to store the coordinates of the points and then perform spline interpolation.
I took the solution demonstrated here and modified it. There, the spline is computed after you stop drawing; I modified the code so that it draws immediately. You might see, though, that the spline changes during the drawing. For a real application, you will probably need two canvases: one with the old drawings, and the other with just the current stroke, which will change constantly until your mouse stops.
Version 1 uses spline simplification (it deletes points that are close to the line), which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable solution (and is computationally less expensive).
You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points, available for the calculation.
So, as I see it, the problem of the jagged edge of a freehand curve drawn while the mouse moves very fast is not solved! In my opinion you need to work around the polling frequency of the mousemove event in the system, i.e. use a different mouse driver or the like. The second way is the math: using some kind of algorithm to accurately bend the straight line between two points when the mouse event is polled. For a clear view you can compare how a freehand line is drawn in Photoshop and how in MS Paint. Thanks folks ;)
I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
//screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if (event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this:
which seems significantly smoother than your example.
Interpolating mouse movements with line segments is fine; GIMP does it that way, too, as the following screenshot of a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. wxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich. Namely, drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30 fps. Copying a double buffer is no problem for modern machines, but wxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.
I agree with harviz: the problem isn't solved. It should be solved at the operating system level, by recording mouse movements in a priority thread, but no operating system I know of does that. The application developer can, however, work around this operating system limitation by interpolating better than linearly.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of putting a line between two points A and D, look at the point P preceding A and use a cubic spline with the control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we run into the end point at the same angle as the line interpolation would, but leave the starting point at the same angle at which the previous segment came in, making the joint smooth. We use the length of the segment for both control point distances to make the size of the bend fit its proportion.
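For concreteness, a small Haskell sketch of those control points, with bare tuples standing in for a 2D vector type:

type V = (Double, Double)

add, sub :: V -> V -> V
add (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
sub (x1, y1) (x2, y2) = (x1 - x2, y1 - y2)

scale :: Double -> V -> V
scale k (x, y) = (k * x, k * y)

norm :: V -> Double
norm (x, y) = sqrt (x * x + y * y)

-- control points B and C for the cubic segment from a to d,
-- given the preceding point p
controlPoints :: V -> V -> V -> (V, V)
controlPoints p a d = (a `add` v', d `sub` w')
  where
    v  = a `sub` p
    w  = d `sub` a
    w' = scale 0.25 w
    v' = scale (norm w / norm v / 4) v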
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness here, which might be alleviated by using both the previous and the next point to adjust both angles, but that would also mean drawing one segment fewer than the points one has. I find this result satisfactory already, so I didn't try.