Writing a paint program à la MS Paint - how to interpolate between mouse move events? - user-interface

I want to write a paint program in the style of MS Paint.
For painting things on screen when the user moves the mouse, I have to wait for mouse move events and draw on the screen whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
        positionOld = positionNew
Now my question: interpolating with straight line segments looks too jagged for my taste. Can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop implement?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive? The GUI framework I'm using is wxWidgets.
(Programming language: Haskell, but that's irrelevant here)
EDIT: Clarification: I want something that looks smoother than straight line segments; see the picture:
EDIT2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im     <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1)  -- wxBitmap
dc     <- memoryDCCreate                 -- wxMemoryDC
memoryDCSelectObject dc bitmap
...

-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
    ...
    line dc posOld posNew [ color    := white
                          , penJoin  := JoinRound
                          , penWidth := 2 ]
    repaint sw  -- a wxScrolledWindow

-- handle paint event
onPaint ... = do
    ...
    -- draw bitmap on the wxScrolledWindow
    drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is why I'm getting a rather low frequency of mouse move events.

Live demos
version 1 - smoother, but changes more while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth, but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is
Spline interpolation of the points
The solution is to store the coordinates of the points and then perform spline interpolation.
I took the solution demonstrated here and modified it. They computed the spline after you stop drawing; I modified the code so that it draws immediately. You might see, though, that the spline changes during the drawing. For a real application, you will probably need two canvases: one with the old drawings, and the other with just the current stroke, which will change constantly until your mouse stops.
Version 1 uses spline simplification - deleting points that are close to the line - which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable result (and is computationally less expensive).
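If you want to roll the interpolation yourself, here is a minimal sketch of Catmull-Rom spline interpolation over the stored points, written in C++ for consistency with the other answers here (the Pt type and the drawLine callback are assumptions, not the jsfiddle code):

#include <vector>

struct Pt { double x, y; };

// Evaluate the Catmull-Rom segment between p1 and p2 at t in [0, 1].
Pt catmullRom(Pt p0, Pt p1, Pt p2, Pt p3, double t)
{
    double t2 = t * t, t3 = t2 * t;
    auto f = [&](double a, double b, double c, double d) {
        return 0.5 * (2 * b + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t2
                      + (-a + 3 * b - 3 * c + d) * t3);
    };
    return { f(p0.x, p1.x, p2.x, p3.x), f(p0.y, p1.y, p2.y, p3.y) };
}

// Emit a smooth polyline through all recorded mouse positions.
void drawSpline(const std::vector<Pt>& pts, void (*drawLine)(Pt, Pt))
{
    if (pts.size() < 2) return;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        // Clamp the neighbours at the two ends of the stroke.
        Pt p0 = pts[i == 0 ? 0 : i - 1];
        Pt p3 = pts[i + 2 < pts.size() ? i + 2 : i + 1];
        Pt prev = pts[i];
        for (int s = 1; s <= 8; ++s) {
            Pt q = catmullRom(p0, pts[i], pts[i + 1], p3, s / 8.0);
            drawLine(prev, q);
            prev = q;
        }
    }
}

Redrawing the whole spline on every mouse move is exactly why the demo needs the two-canvas trick described above.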

You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points available for the calculation.

As I see it, the problem of the jagged edges of a freehand curve drawn with fast mouse movements is not solved! In my opinion, one way is to work around the polling frequency of the mousemove event in the system, i.e. by using a different mouse driver or the like. The second way is math: using some kind of algorithm to accurately bend the straight line between two points when the mouse event is polled. For a clear view, you can compare how a freehand line is drawn in Photoshop and how in MS Paint. Thanks, folks. ;)

I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
// screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if (event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this (see the picture), which seems significantly smoother than your example.

Interpolating mouse movements with line segments is fine; GIMP does it that way too, as the following screenshot from a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. wxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich. Namely, drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30 fps. Copying a double buffer is no problem for modern machines, but wxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.

I agree with harviz - the problem isn't solved. It should be solved on the operating system level by recording mouse movements in a priority thread, but no operating system I know of does that. However, the app developer can also work around this operating system limitation by interpolating better than linearly.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of putting a line between two points A and D, look at the point P preceding A and use a cubic spline with the following control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we arrive at the end point at the same angle as the line interpolation would, but we leave the starting point at the same angle the previous segment came in at, making the joint smooth. We use the length of the segment for both control point distances to make the size of the bend fit its proportion.
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness, which might be alleviated by using both the previous and the next point to adjust for both angles, but that would also mean drawing one point less than what one has got. I find this result already satisfactory, so I didn't try.
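For illustration, here is a minimal C++ sketch of this construction (the Point type, the helpers, and the drawLine callback are assumptions; plug in your framework's equivalents):

#include <cmath>

struct Point { double x, y; };

static Point add(Point a, Point b) { return { a.x + b.x, a.y + b.y }; }
static Point sub(Point a, Point b) { return { a.x - b.x, a.y - b.y }; }
static Point mul(Point a, double s) { return { a.x * s, a.y * s }; }
static double len(Point a) { return std::sqrt(a.x * a.x + a.y * a.y); }

// Draw a cubic curve from A to D, where P is the point preceding A.
// Control points as above: B = A + v', C = D - w', with v = A - P,
// w = D - A, w' = w / 4 and v' = v * |w| / |v| / 4.
void drawSmoothSegment(Point P, Point A, Point D,
                       void (*drawLine)(Point, Point))
{
    Point v = sub(A, P);
    Point w = sub(D, A);
    double lv = len(v);
    Point B = (lv > 0) ? add(A, mul(v, len(w) / lv / 4.0)) : A;
    Point C = sub(D, mul(w, 0.25));

    // Flatten the cubic into short line segments.
    const int steps = 16;
    Point prev = A;
    for (int i = 1; i <= steps; ++i) {
        double t = (double) i / steps, u = 1.0 - t;
        Point q = add(add(mul(A, u * u * u), mul(B, 3 * u * u * t)),
                      add(mul(C, 3 * u * t * t), mul(D, t * t * t)));
        drawLine(prev, q);
        prev = q;
    }
}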

Related

How to identify if a set of lines is similar to a shape

Currently I have a program that allows the user to paint on it by capturing the mouse position every 0.05 seconds and drawing a line between a point and the next. With that setup I am looking for a way to identify shapes like a circle, a rectangle or the letter 'P'.
My current algorithm divides the screen into sections, marks the sections containing points recorded by the player, builds a matrix from the marked sections, and then compares that matrix with every shape matrix.
This lacks any kind of support for rotations, sizes or positions. Controlling the threshold is also tricky, returning false results in most cases.
I need an algorithm that can identify, for example, a 'P' as a 'P'.
Note: My current application is running on a C++ framework, so any libraries or tools are welcome, but I am interested in the algorithm behind it.
Edit: After thinking the problem over, I have dropped the grid on the screen; instead, I capture the points and shift and scale them so the shape fits on a grid, and over that grid I compare with the known shapes.
Picture of the process
This solves the position and size problems while being fast enough. Rotating the input and then rescaling in a loop may also solve the rotation problem (though it seems that would have a high cost and wouldn't be very reliable).
I would gladly welcome alternative methods of handling shape comparison or the rotation.
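For what it's worth, a minimal C++ sketch of the normalisation step described in the edit might look like this (the grid size and the cell-agreement metric are assumptions to tune, not the asker's exact code):

#include <algorithm>
#include <array>
#include <vector>

struct P { float x, y; };
constexpr int GRID = 8;
using ShapeGrid = std::array<std::array<bool, GRID>, GRID>;

// Shift and scale the captured points into a fixed-size occupancy grid.
ShapeGrid normalize(const std::vector<P>& pts)
{
    ShapeGrid g{};
    if (pts.empty()) return g;
    float minx = pts[0].x, maxx = pts[0].x;
    float miny = pts[0].y, maxy = pts[0].y;
    for (const P& p : pts) {
        minx = std::min(minx, p.x); maxx = std::max(maxx, p.x);
        miny = std::min(miny, p.y); maxy = std::max(maxy, p.y);
    }
    float w = std::max(maxx - minx, 1e-6f);
    float h = std::max(maxy - miny, 1e-6f);
    for (const P& p : pts) {
        int gx = std::min(GRID - 1, (int) ((p.x - minx) / w * GRID));
        int gy = std::min(GRID - 1, (int) ((p.y - miny) / h * GRID));
        g[gy][gx] = true;
    }
    return g;
}

// Fraction of grid cells on which input and template agree.
float similarity(const ShapeGrid& a, const ShapeGrid& b)
{
    int same = 0;
    for (int y = 0; y < GRID; ++y)
        for (int x = 0; x < GRID; ++x)
            if (a[y][x] == b[y][x]) ++same;
    return (float) same / (GRID * GRID);
}

The input would then be classified as whichever template grid scores the highest similarity above a chosen threshold.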

drag drawLine jcanvas optimisation coordinates

Considering this basic case, one may expect the coordinates of the layer to be updated... but they are not.
Instead, there is the possibility of remembering the starting point, computing the mouse offset, and then updating the coordinates, like in this test, but... the effect is quite extreme.
Expected : point x1,y1 is static
Result : point x1,y1 moves incredibly fast
Even if the coordinates are set to constants, the drag behaves the same.
The main problem here is that the drag action applies to the whole layer.
Fix : apply the modification at the end of the drag, like in this snippet.
But it is relatively ugly. Does anyone have a better way to
get the actual coordinates of the points of the line on the fly, and
manage to keep one point of the line static while the others are moving?
Looking forward to your suggestions.
In order to maintain the efficiency of dragging layers, jCanvas only offsets the x and y properties for any draggable layer (including paths). Therefore, when dragging, you can compute the absolute positions of any set of path coordinates using something along these lines:
var absX1 = layer.x + layer.x1;
var absY1 = layer.y + layer.y1;
(assuming layer references a jCanvas layer, of course)

What's a good way to optimise rendering a 2D tile game in XNA?

EDIT: I've opted for the second approach as I got 150+ fps even when all 3 tile layers fill the entire screen.
EDIT 2: I read a lot about vertex buffer objects and how they would be great for static geometry and although I still have no idea how to turn my 2D tiles into a VBO and store it on the GPU memory, it definitely seems like the way to go if anyone else is looking for a fast way to render static geometry/quads.
I'm making a game like Super Meat Boy and was wondering if it would be better/faster to store level tiles in an array list and do a camera bounds overlap test to see if it should be rendered.
foreach (Tile tile in world.tiles) {
    if (Overlap(camera.bounds, tile))
        render(tile);
}
Or would a 2D array storing every grid square and only reading off between camera bounds be better?
int left   = (int)(camera.position.x - camera.width/2);
int right  = (int)(camera.position.x + camera.width/2) + 1;
int top    = (int)(camera.position.y - camera.height/2); // WHY XNA DO YOU UPSIDE DOWN!!!
int bottom = (int)(camera.position.y + camera.height/2) + 1;
for (int x = left; x < right; x++) {
    for (int y = top; y < bottom; y++) {
        render(world.tiles[x][y]);
    }
}
The camera can fit 64*36 tiles on screen, which is 2300-odd tiles to read off using the latter approach, but is doing an overlap test with every tile in the level any better? I read an answer about joining matching adjacent tiles into a larger quad and just repeating the texture (although I'm using a texture atlas, so I'm not sure how to repeat a region of a texture).
Cheers guys.
From my past experience I can share some details. A 2D map normally runs from 0 to N, and N is far larger than the screen size. At first I tried loading everything at once, but that was very much of an overhead - I ended up with 0 FPS. Since I wanted different kinds of objects, repeating the same object to save memory didn't work either. Then I tried bounding things with reference to the screen: the objects are still there, but they are not rendered, so they are moved out of the draw pipeline, and the game came back to life.
Now, for further performance, with C# 4.0 I can use the TPL and async/await with draw. It is like a better version of threading, so you can throw stuff there and let it be rendered at will.
Here is the deal with XNA or any kind of graphics library: there is a complete graphics rendering pipeline, and that makes things a whole lot slower, specifically if the PC is old and only has a 64MB graphics card. Your game will be deployed to any kind of machine, right?
So, to explain it in XNA terms: Update is simple code, and it runs as fast as it can; there is nothing to stop it. But Draw has a complete pipeline ahead of it, and that is the sole reason for having Begin and End - after End, it can start pushing things to the pipeline. See this article for reference: http://classes.soe.ucsc.edu/cmps020/Winter11/readings/hlsl.pdf
So here is the deal: the rendering pipeline is needed, but there is no need for it to be slow and blocking. Just make it multi-threaded and things will be quite a bit faster for you. If you want to go further, you will have to use C# to its fullest, including linked lists and the like, but that would be the last stage.
I hope I have given enough details to provide you an answer. Please let me know if any further details are needed.

Scrolling parallax background, infinitely repeated in libgdx

I'm making a 2D sidescrolling space shooter-type game, where I need a background that can be scrolled infinitely (it is tiled or wrapped repeatedly). I'd also like to implement parallax scrolling, so perhaps have one lowest background nebula texture that barely moves, a higher one containing far-away stars that barely moves, and the highest background containing close stars that moves a lot.
I see from google that I'd have each layer move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and in the physical 800x480 screen could show anything from 128x128 pixels (a ship) to a huge area of space featuring the textures wrapped multiple times on their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out), and then how do I layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?), and move them as the player's coords change? I've looked at the javadocs and the examples but can't find anything like this problem; apologies if it's obvious!
Hey, I am also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository; it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input, since it's hooked up to use the touch screen/mouse.
This worked for me. As for the repeated background, I haven't gotten that far yet, but I think you just need basic logic, as in: when one screen away from the end, change the first few screens' positions to line up at the end.
I don't have much more to say about parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder, and there are several explanations around the web; I liked this one.
The matter with the background is really easy to solve. This and other related problems can be approached using modular algebra. I won't go into the details, because once shown it is very easy to understand.
Imagine that you want to show a compass on your screen. You have a 1024x16 texture representing the cardinal points; basically all you have is a strip. Leaving aside considerations about the real orientation and such, you have to render it.
Say your viewport is 300x400 and you want 200px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach position (1024-200) = 824. Once you're past this position there is clearly no more texture, but since it is a compass, once you reach the end of it, it has to start again. So this is the answer: another texture region will do the trick. The range 825-1023 has to be represented by a second region, which will have a size of (1024-pos) for every pos > 824 && pos < 1024.
The following code is intended to work as a real example of a compass. It's very dirty, since it works with relative positions all the time due to the conversion between the ranges (0-3.6) and (0-1024).
spriteBatch.begin();
if (compassorientation < 0)
    compassorientation = (float) (3.6 - compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);
if (compassorientation < ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 200, 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth(), 32 * (float) 1.2);
} else if (compassorientation > ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 1024 - (int) (compassorientation / 3.6 * 1024), 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), 32 * (float) 1.2);
    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2, compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth() - (compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth()), 32 * (float) 1.2);
}
spriteBatch.end();
You can use the setWrap function like below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! Hope this helps!
Beneath where you initialize your Texture for the object, type in this:
YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
where YourTexture is the texture that you want to parallax scroll.
In your render method, type in this code:
batch.draw(YourTexture, 0, 0, 0, srcy, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
srcy += 10;
It is going to give you an error, so make a variable called srcy. It is nothing too fancy:
int srcy;

High resolution and high framerate mouse coordinates on OSX? (Or other solution?)

I'd like to get mouse movements in high resolution and high framerate on OSX.
"High framerate" = 60 fps or higher (preferably > 120)
"High resolution" = Subpixel values
Problem
I've got an OpenGL view running at about the monitor refresh rate, so it's ~60 fps. I use the mouse to look around, so I've hidden the mouse cursor and I'm relying on mouse delta values.
The problem is the mouse events come in at much too low framerate, and values are snapped to integer (whole pixels). This causes a "choppy" viewing experience. Here's a visualization of mouse delta values over time:
mouse delta X
^ xx
2 | x x x x xx
| x x x x xx x x x
0 |x-x-x--xx-x-x-xx--x-x----x-xx-x-----> frame
|
-2 |
v
This is a typical (shortened) curve created from the user moving the mouse a little bit to the right. Each x represents the deltaX value for each frame, and since deltaX values are rounded to whole numbers, this graph is actually quite accurate. As we can see, the deltaX value will be 0.000 one frame, then 1.000 the next, then 0.000 again, then 2.000, then 0.000 again, then 3.000, 0.000, and so on.
This means that the view will rotate 2.000 units one frame, then 0.000 units the next, and then 3.000 units. This happens while the mouse is being dragged at more or less constant speed. Needless to say, this looks like crap.
So, how can I 1) increase the event framerate of the mouse, and 2) get subpixel values?
So far
I've tried the following:
- (void)mouseMoved:(NSEvent *)theEvent {
    CGFloat dx, dy;
    dx = [theEvent deltaX];
    dy = [theEvent deltaY];
    // ...
    actOnMouse(dx, dy);
}
Well, this one was obvious. dx here is a float, but the values are always rounded (0.000, 1.000, etc.). This creates the graph above.
So the next step, I thought, was to tap the mouse events before they enter the WindowServer. So I created a CGEventTap:
eventMask = (1 << kCGEventMouseMoved);
eventTap  = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                             0, eventMask, myCGEventCallback, NULL);
//...
myCGEventCallback(...) {
    double dx = CGEventGetDoubleValueField(event, kCGMouseEventDeltaX);
    double dy = CGEventGetDoubleValueField(event, kCGMouseEventDeltaY);
}
Still, the values are n.000, although I believe the rate of event firing is a little higher. But it's still not at 60 fps. I still get the chart above.
I've also tried setting the mouse sensitivity really high and then scaling the values down on my side. But it seems OSX adds some sort of acceleration or something: the values get really "unstable" and consequently unusable, and the rate of fire is still too low.
With no luck, I've been starting to follow the mouse events down the rabbit hole, and I've arrived at IOKit. This is scary for me. It's the mad hatter. The Apple documentation gets weird and seems to say "if you're this deep down, all you really need is header files".
So I have been reading header files. And I've found some interesting tidbits.
In <IOKit/hidsystem/IOLLEvent.h> on line 377 there's this struct:
struct { /* For mouse-down and mouse-up events */
    UInt8 subx; /* sub-pixel position for x */
    UInt8 suby; /* sub-pixel position for y */
    // ...
} mouse;
See, it says sub-pixel position! Ok. Then on line 73 in <IOKit/hidsystem/IOLLParameter.h>
#define kIOHIDPointerResolutionKey "HIDPointerResolution"
Hmm.
All in all, I get the feeling OSX knows about sub-pixel mouse coordinates deep down, and there just has to be a way to read raw mouse movements every frame, but I've just no idea how to get those values.
Questions
Erh, so, what am I asking for?
Is there a way of getting high framerate mouse events in OSX? (Example code?)
Is there a way of getting sub-pixel mouse coordinates in OSX? (Example code?)
Is there a way of reading "raw" mouse deltas every frame? (I.e. not relying on an event.)
Or, how do I get NXEvents or set HIDParameters? Example code? (So I can dig deeper into this on my own...)
(Sorry for long post)
(This is a very late answer, but one that I think is still useful for others that stumble across this.)
Have you tried filtering the mouse input? This can be tricky because filtering tends to be a trade-off between lag and precision. However, years ago I wrote an article for a game development site that explained how I filtered my mouse movements. The link is http://www.flipcode.com/archives/Smooth_Mouse_Filtering.shtml.
Since that site is no longer under active development (and may go away) here is the relevant excerpt:
In almost every case, filtering means averaging. However, if we simply average the mouse movement over time, we'll introduce lag. How, then, do we filter without introducing any side-effects? Well, we'll still use averaging, but we'll do it with some intelligence. And at the same time, we'll give the user fine-control over the filtering so they can adjust it themselves.
We'll use a non-linear filter of averaged mouse input over time, where the older values have less influence over the filtered result.
How it works
Every frame, whether you move the mouse or not, we put the current mouse movement into a history buffer and remove the oldest history value. So our history always contains X samples, where X is the "history buffer size", representing the most recent sampled mouse movements over time.
If we used a history buffer size of 10, and a standard average of the entire buffer, the filter would introduce a lot of lag. Fast mouse movements would lag behind 1/6th of a second on a 60FPS machine. In a fast action game, this would be very smooth, but virtually unusable. In the same scenario, a history buffer size of 2 would give us very little lag, but very poor filtering (rough and jerky player reactions.)
The non-linear filter is intended to combat this mutually-exclusive scenario. The idea is very simple. Rather than just blindly average all values in the history buffer equally, we average them with a weight. We start with a weight of 1.0. So the first value in the history buffer (the current frame's mouse input) has full weight. We then multiply this weight by a "weight modifier" (say... 0.2) and move on to the next value in the history buffer. The further back in time (through our history buffer) we go, the values have less and less weight (influence) on the final result.
To elaborate, with a weight modifier of 0.5, the current frame's sample would have 100% weight, the previous sample would have 50% weight, the next oldest sample would have 25% weight, the next would have 12.5% weight and so on. If you graph this, it looks like a curve. So the idea behind the weight modifier is to control how sharply the curve drops as the samples in the history get older.
Reducing the lag means decreasing the weight modifier. Reducing the weight modifier to 0 will provide the user with raw, unfiltered feedback. Increasing it to 1.0 will cause the result to be a simple average of all values in the history buffer.
We'll offer the user two variables for fine control: the history buffer size and the weight modifier. I tend to use a history buffer size of 10, and just play with the weight modifier until I'm happy.
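As a concrete illustration, here is a minimal C++ sketch of the filter described in the excerpt (the class name and default values are mine; the two constructor parameters correspond to the history buffer size and weight modifier above):

#include <cstddef>
#include <deque>
#include <utility>

class MouseFilter {
public:
    MouseFilter(std::size_t historySize = 10, double weightModifier = 0.2)
        : historySize_(historySize), weightModifier_(weightModifier) {}

    // Call once per frame with the raw delta; returns the filtered delta.
    std::pair<double, double> filter(double dx, double dy)
    {
        history_.push_front({ dx, dy });
        if (history_.size() > historySize_) history_.pop_back();

        double weight = 1.0, totalWeight = 0.0, fx = 0.0, fy = 0.0;
        for (const auto& d : history_) {   // newest sample first
            fx += d.first * weight;
            fy += d.second * weight;
            totalWeight += weight;
            weight *= weightModifier_;     // older samples count for less
        }
        return { fx / totalWeight, fy / totalWeight };
    }

private:
    std::size_t historySize_;
    double weightModifier_;
    std::deque<std::pair<double, double>> history_;
};

A weight modifier of 0 gives raw, unfiltered input; values approaching 1 approach a plain average of the whole buffer.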
If you are using the IOHIDDevice callbacks for the mouse you can use this to get a double value:
double doubleValue = IOHIDValueGetScaledValue(inIOHIDValueRef, kIOHIDValueScaleTypePhysical);
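For context, here is a hedged sketch of how such a callback might be wired up with an IOHIDManager (matching all HID devices for brevity, error handling omitted; an illustration, not production code):

#include <IOKit/hid/IOHIDLib.h>
#include <cstdio>

static void handleInput(void* context, IOReturn result,
                        void* sender, IOHIDValueRef value)
{
    IOHIDElementRef elem = IOHIDValueGetElement(value);
    if (IOHIDElementGetUsagePage(elem) != kHIDPage_GenericDesktop)
        return;
    uint32_t usage = IOHIDElementGetUsage(elem);
    if (usage == kHIDUsage_GD_X || usage == kHIDUsage_GD_Y) {
        // Raw device delta, before WindowServer rounding/acceleration.
        double d = IOHIDValueGetScaledValue(value,
                                            kIOHIDValueScaleTypePhysical);
        std::printf("%s delta: %f\n",
                    usage == kHIDUsage_GD_X ? "X" : "Y", d);
    }
}

int main()
{
    IOHIDManagerRef mgr =
        IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);
    IOHIDManagerSetDeviceMatching(mgr, NULL);  // NULL matches all devices
    IOHIDManagerRegisterInputValueCallback(mgr, handleInput, NULL);
    IOHIDManagerScheduleWithRunLoop(mgr, CFRunLoopGetCurrent(),
                                    kCFRunLoopDefaultMode);
    IOHIDManagerOpen(mgr, kIOHIDOptionsTypeNone);
    CFRunLoopRun();  // deliver input callbacks
}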
The possibility of subpixel coordinates exists because Mac OS X is designed to be resolution independent. A square of 2x2 hardware pixels on a screen could represent a single virtual pixel in software, allowing the cursor to be placed at (x + 0.5, y + 0.5).
On any actual Mac using normal 1x scaling, you will never see subpixel coordinates because the mouse cursor cannot be moved to a fractional pixel position on the screen--the quantum of mouse movement is precisely 1 pixel.
If you need to get access to pointer device delta information at a lower level than the event dispatching system provides then you'll probably need to use the user-space USB APIs.
