I am implementing a sliding window as an LPF to smooth my data.
With a window size of W, the final W elements (if the window slides from the beginning to the end) or the first W elements (if it slides from the end to the beginning) cannot be smoothed.
How does one deal with them?
Is there a good way to handle this?
Every kind of filter will shorten your data. Consider it shorter but better!
One tip I can offer: if you have an index, consider trimming W/2 off each side rather than chopping W off the front or back. This is more accurate, because your new index reflects the smoothing.
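For example, here is a minimal sketch of that in Python/NumPy (the moving-average kernel and the function name are my own, not from the question): smooth the data with a window of W and return an index trimmed by W/2 on each side.

import numpy as np

def smooth_with_index(x, W):
    # Moving-average low-pass filter; 'valid' mode drops the W-1 samples
    # that cannot be fully covered by the window.
    kernel = np.ones(W) / W
    y = np.convolve(x, kernel, mode="valid")   # len(y) == len(x) - W + 1
    half = W // 2
    # Indices of the surviving samples, centred on the data they average over.
    idx = np.arange(half, half + len(y))
    return idx, y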
I'm trying to make an algorithm of any sort that can find the solution with the lowest move count. I'm coding in GML using GMS2 and it's for a game that I'm making myself.
This is a picture of the most simple level
I call it a complicated 2D puzzle game because of how many types of blocks there are. The goal is to connect the two blocks, and the player gets a trophy for solving it in a low move count, but I don't want to have to find the most efficient solutions by hand for every level, especially some of the super complicated levels.
This is an example of a level with all block types in it
The black boxes with an X on them are immovable and act as obstacles
The two blocks you're supposed to connect and the small blue blocks are simple movable blocks and can be moved in any of the four cardinal directions as long as the space is open
The long blue blocks act the same way but require an open space their same size to move into a spot
The red blocks act as a normal movable block, but cannot be placed next to each other. They can still be diagonal from each other though
And the green blocks can only be moved in the directions indicated on the block, meaning only left and right or only up and down
My first thought was to make a depth-first search algorithm to just brute-force all possible game states until it finds the quickest solution, but I'm unsure how to pull that off.
If anyone has any ideas for methods I could use or information that could help at all then I'm open to all feedback since I don't really know much about algorithms or machine learning at all, thanks in advance!
The standard approach is a breadth-first search. Looking at this hashmap and native queues, I believe you can code it in GML, though that wouldn't be my first choice. (Python would be, because you can have complex objects as dictionary keys; a dictionary is what Python calls a hashmap. Without that, you'll need to serialize/deserialize objects to use them as keys, "serialize" being a complicated word meaning "write out a text representation", ideally a compact one.)
The idea looks like this:
Create a hashmap of position -> last move
initialize it with the start position and last move = none
put the start position into a queue of positions to analyze
while stuff in queue and not yet solved:
    pop position from queue
    for each valid move from position:
        if resulting position is solved:
            it is solved and we have the final position
        if resulting position is NOT in hashmap:
            add resulting position + move to hashmap
            add resulting position to queue
if no final position:
    there is no answer
else:
    path = []
    prev_move = hashmap[final position]
    while prev_move is not none:
        add prev_move to start of path
        undo prev_move from final position to get the new final position
        prev_move = hashmap[final position]
And if it is solvable, then path will wind up being the shortest path from the starting position to any position where it is solved.
Note that the number of possible positions explodes, so you can only solve fairly small puzzles. With ideas like A* search you can make this slightly more efficient, but given how much backtracking tends to be involved, not much.
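For what it's worth, here is a rough sketch of that search in Python (my own illustration; moves, apply_move and is_solved are placeholder helpers you would supply, and positions must already be hashable, i.e. "serialized"):

from collections import deque

def shortest_solution(start, moves, apply_move, is_solved):
    # hashmap of position -> (previous position, move that reached it)
    came_from = {start: None}
    queue = deque([start])
    final = start if is_solved(start) else None
    while queue and final is None:
        pos = queue.popleft()
        for m in moves(pos):
            nxt = apply_move(pos, m)
            if nxt in came_from:
                continue
            came_from[nxt] = (pos, m)
            if is_solved(nxt):
                final = nxt
                break
            queue.append(nxt)
    if final is None:
        return None                    # no answer
    path = []
    while came_from[final] is not None:
        prev, m = came_from[final]
        path.insert(0, m)              # add prev_move to the start of the path
        final = prev
    return path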
Currently I have a program that lets the user paint on it by capturing the mouse position every 0.05 seconds and drawing a line between each point and the next. With that setup I am looking for a way to identify shapes like a circle, a rectangle or the letter 'P'.
My current algorithm divides the screen into sections, marks the sections containing points recorded by the player, builds a matrix from the marked sections, and then compares that matrix with every shape matrix.
This lacks any kind of support for rotations, sizes or positions. Also, controlling the threshold is tricky and in most cases it returns false matches.
I need an algorithm that allows me to identify, for example, a 'P' as a 'P'.
Note: My current application runs on a C++ framework, so any libraries or tools are welcome, but I am mainly interested in the algorithm behind it.
Edit: After thinking about the problem I have dropped the fixed grid on the screen. Instead, I capture the points and shift and scale them so the shape fits on a grid, and over that grid I compare with the known shapes.
Picture of the process
This solves the position and size problems while being fast enough. Rotating the input and then resizing in a loop may also solve the rotation problem (though it seems this would have a high cost and wouldn't be very reliable).
I would gladly welcome alternative methods of handling shape comparison or the rotation.
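For illustration, the normalisation step I describe amounts to roughly the following (a Python sketch, not my actual C++ code; the grid size, names and template comparison are placeholders):

import numpy as np

GRID = 16   # resolution of the comparison grid

def to_grid(points):
    # Shift and scale the captured stroke so its bounding box fills the grid,
    # then mark the cells the stroke passes through.
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    size = np.maximum(maxs - mins, 1e-9)        # avoid division by zero
    scaled = (pts - mins) / size * (GRID - 1)
    grid = np.zeros((GRID, GRID), dtype=bool)
    for x, y in scaled.round().astype(int):
        grid[y, x] = True
    return grid

def similarity(grid, template):
    # Fraction of cells on which the stroke grid and a known shape agree.
    return (grid == template).mean()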
I need help with efficiently drawing/culling a series of opaque rectangles, in other words, this is a stack of index cards on a desk. The specifics are:
no rotations, so everything is simple integer coordinates, axis-aligned
cards are fully opaque
cards can have any integer X,Y position
all cards are the same size
I have a list of the cards in z-order
I think I have (essentially) two choices:
1) brute force painter's approach, where all cards within the desktop viewport are fully drawn, in reverse z-order. Pros: simple. Cons: a) requires an off-screen buffer to avoid flicker, b) potentially lots of time wasted on drawing expensive areas of each card when that area might end up being obscured, worst-case being the entire card getting covered.
2) an algorithm that generates a list of visible (or obscured) rectangles for every card, such that only visible portions are ever drawn.
Choice 2 is where I need advice, especially in terms of algorithms, and the pros and cons of a "smarter" draw cycle.
Any language/platform agnostic advice is appreciated. If it matters, this will be implemented on MS Windows.
Am open to any suggestions, including hybrid approaches. I realize a precise answer is likely very dependent on the particulars of the code, but I'd be happy even with generalized concepts at this point!
Additional notes: It will be possible to have thousands of cards stacked on top of each other, so I'm highly motivated to avoid a purely brute-force painter's approach - at least without some sort of pre-processing to cull fully obscured cards. The same goes for lots of cards that are closely tiled, the worst case being only their borders showing - I would like to skip painting the complex innards in those cases, if possible.
What about painting only the contour line of each card, from the bottom-most to the top-most? Then you can do a flood fill to paint the inside of the contours. This way you would repaint only a few pixels, corresponding to the borders where there are intersections.
Edit: Uploaded images to help me explain the idea.
The first step is to mark the borders of the cards, assigning them their Z-order (top-left image). This way there are overwrites, but only on the borders, which amount to only a few pixels.
After that, you can paint the texture of the cards (lowest Z-order first) following these rules:
You start from the border and paint the blanks until you reach a border;
If the border's Z-order is the current one, you paint it;
If the border's Z-order found is less than the current Z-order, you continue painting as if it were blank;
Otherwise, you found a border with greater Z-order, so you skip that block;
Next card.
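To make the first step concrete, here is a small Python/NumPy sketch of the border-marking pass (card size and names are hypothetical, and it assumes every card lies fully inside the canvas); the fill pass would then walk each row as described above:

import numpy as np

CARD_W, CARD_H = 60, 40   # all cards share the same size

def mark_borders(cards, canvas_w, canvas_h):
    # cards is a list of (x, y) top-left corners in Z-order (index 0 = lowest).
    # 0 means 'no border here'; z + 1 marks a border pixel of card z.
    border = np.zeros((canvas_h, canvas_w), dtype=np.int32)
    for z, (x, y) in enumerate(cards, start=1):
        border[y, x:x + CARD_W] = z               # top edge
        border[y + CARD_H - 1, x:x + CARD_W] = z  # bottom edge
        border[y:y + CARD_H, x] = z               # left edge
        border[y:y + CARD_H, x + CARD_W - 1] = z  # right edge
    return border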
Hope it helps :)
OK, here's some loose pseudo code for how I think this problem can be solved.
Begin with a z-order sorted list of the cards. Each card has a list of visible rects (explained later), that needs to start out with just one rect, set to the card's full bounding box. The loop is begun with the lowest z-order card first.
Cards.SortZOrder();
foreach Card in Cards do
Card.ResetVisibleRects; // VisibleRects.DeleteAll; VisibleRects.Add(BoundingBox);
CurrentCard = Cards.Last;
TestCard = CurrentCard;
The idea here is that we're going to work upwards from our "current" card, and see what effect each higher card has on it. There are 3 possibilities as we test each higher card. It either completely misses, completely obscures, or partially obscures. For a complete miss, we ignore the test card, since it doesn't affect our current card. For a complete obscure, our current card gets culled. A partial overlap is where the list of visible rectangles comes in, since partial overlap can (potentially) split the lower rectangle into two. (It's easy to see how this plays out if you just grab two playing cards, or index cards. The top one causes the bottom one to either adjust one of its sides, if they share any edge, or it causes the bottom one to split into two rects if they share no edges.)
Caveat: This is VERY unoptimized, unrolled code ... just for talking about the principles. And yes, I'm about to use "goto" ... mock me if you must.
[GetNextCard]
TestCard = Cards.NextHighest(TestCard);
[OverlapTest]
// Test the overlap of TestCard against all our VisibleRects.
// The first time through this test, CurrentCard will have only one
// rect in the VisibleRect list, but that rect may get split up later.
// OverlapTests() checks each rect in the VisibleRects list, and
// creates an Overlap record for any of the rects that do overlap,
// like: Overlap.RectIndex, Overlap.Type. It also summarizes the
// results into the .Summary field.
Result = CurrentCard.OverlapTests(TestCard);
case Result.Summary
  none:
    goto [GetNextCard];
  complete:
    CurrentCard.Culled = true;
    // we're now done with this CurrentCard, so we move upwards
    CurrentCard = TestCard;
    goto [GetNextCard]
  partial:
    // since there was some overlap, we need to adjust,
    // split, or delete some or all of our visible rectangles.
    // (we won't delete them all, that would have been caught above)
    foreach Overlap in Result.Overlaps
      R = CurrentCard.VisibleRects[Overlap.RectIndex];
      case Overlap.Type
        partial:  CurrentCard.SplitOrAdjust(R, TestCard);
        complete: CurrentCard.Delete(R);
      end case
    // so we've either added new rects, or deleted some, but either
    // way, we're done with this test card. We leave CurrentCard
    // where it is and loop to look at the next higher card.
    goto [GetNextCard]
The testing is done when CurrentCard = Cards.First since the topmost card is always fully visible.
Just a couple more thoughts here ...
I think this would be fairly straightforward in real code. The most complicated thing about it would be splitting a rectangle into two, and given the fact that it's all integer math, even that is trivial.
Also, this doesn't have to be performed every paint cycle. It only needs to be done when there's any change in contents, position, or z-order.
After a pass up the list, you're left with a paint-ready list of cards, each non-culled card having at least one rectangle that can potentially fall within the display's clipping/dirty region. When you paint a card you can examine its list of visible rectangles, and potentially be able to skip drawing portions of the card that might be expensive to render.
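In case it helps, here is the rectangle subtraction that the split-or-adjust step boils down to, sketched in Python (my own helper, using (x1, y1, x2, y2) rectangles with exclusive right/bottom edges):

def subtract(r, cover):
    # Return the parts of r not hidden by cover: the whole of r (no overlap),
    # nothing (fully obscured), or one to four smaller rectangles.
    rx1, ry1, rx2, ry2 = r
    cx1, cy1, cx2, cy2 = cover
    if cx1 >= rx2 or cx2 <= rx1 or cy1 >= ry2 or cy2 <= ry1:
        return [r]                                  # complete miss
    out = []
    if ry1 < cy1:
        out.append((rx1, ry1, rx2, cy1))            # strip above the cover
    if cy2 < ry2:
        out.append((rx1, cy2, rx2, ry2))            # strip below the cover
    top, bottom = max(ry1, cy1), min(ry2, cy2)
    if rx1 < cx1:
        out.append((rx1, top, cx1, bottom))         # strip to the left
    if cx2 < rx2:
        out.append((cx2, top, rx2, bottom))         # strip to the right
    return out                                      # empty list == culled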
I'm working on a game, and I've come up with a rather interesting problem: clever ways to draw starfields.
It's a 2D game, so the action can scroll in the X and Y directions. In addition, we can adjust the scale to show more or less of the play area. I'd also like the starfield to have fake parallax to give an impression of depth.
Right now I'm doing this in the traditional way, by having a big array of stars, each of which is tagged with a 'depth' factor. To draw, I translate each star according to the camera position multiplied by the 'depth', so some stars move a lot, and some move a little. This all works fine, but of course, since I have a finite number of stars in my array, I have issues when the camera moves too far or we zoom out too much. This will all work, but it involves lots of code and special cases.
This offends my sense of elegance. There has got to be a better way of achieving this.
I've considered procedurally generating my stars, which allows me to have an unlimited number: e.g. by using a fixed seed and PRNG to determine the coordinates. I would need to divide the sky up into tiles, generate the seed by hashing the tile coordinates, and then draw, say, 100 stars per tile. This allows me to extend my starfield indefinitely in all directions while still only needing to consider the tiles that are visible --- but this doesn't work with the 'depth' factor, as this allows stars to stray outside their tile. I could simply use multiple layered non-parallax starfields using this algorithm but this strikes me as cheating.
And, of course, I need to do all this every frame, so it's got to be fast.
What do you all reckon?
Have a few layers of stars.
For each layer, use a seeded random number generator (or just an array) to generate the amount of blank space between a star and the next one (a Poisson distribution, if you want to be picky about it). You want the stars pretty sparse, so the blank space will often be more than a whole row. The back layers will be denser than the front ones, obviously.
Use this to give yourself several tiles each (say) two screens wide. Scroll the starfield by keeping track of where that "first" star is for each layer.
The player won't notice the tiling, because you scroll the tiles at different rates for each layer, especially if you use a few layers that are each fairly sparse.
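A quick sketch of that in Python (exponentially distributed gaps give the Poisson spacing mentioned above, and the seed makes each layer reproducible; the numbers are arbitrary):

import random

def star_row(seed, length, mean_gap):
    # Walk along one row of a layer tile, leaving an exponentially
    # distributed amount of blank space before each star.
    rng = random.Random(seed)
    positions, x = [], 0.0
    while True:
        x += rng.expovariate(1.0 / mean_gap)
        if x >= length:
            return positions
        positions.append(x)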
As stars in the background don't move as fast as those in the foreground, you could maybe make multi-layer tiles for the background and replace them with one-layer-ones when you've got time to do that. Oh, and how about repeating patterns in the background layers? This would maybe allow you to pregenerate all background tiles - you could still shift them in height and overlay multiple ones with random offsets or so to make it look random.
Is there anything wrong with wrapping the star field around in X and Y? Because of your depth, the wraparound distance should depend on the depth, but you can do that. Each recorded star at (x,y,depth) should appear at all points
[x + j * S * depth, y + k * S * depth]
for all integers j and k. S is a wraparound parameter. If S is 1 then wraparound happens immediately and all stars are always shown somewhere. If S is higher wraparound doesn't happen immediately and some stars are shown off screen. You'll probably want S big enough to ensure no repeats at maximum zoom out.
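Here is a rough Python sketch of that wraparound (the camera model, screen position = star position minus camera times depth, is my assumption to make it concrete):

import math

def visible_copies(star_x, star_y, depth, cam_x, cam_y, view_w, view_h, S):
    # The star's parallax position repeats with period S * depth on both axes,
    # so emit only the copies that land inside the viewport.
    period = S * depth
    px = star_x - cam_x * depth
    py = star_y - cam_y * depth
    start_x = px + math.ceil((0 - px) / period) * period
    start_y = py + math.ceil((0 - py) / period) * period
    x = start_x
    while x < view_w:
        y = start_y
        while y < view_h:
            yield (x, y)
            y += period
        x += period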
Each frame, render the stars on one single bitmap/layer. They are only dots, and so it will be faster than using any algorithm with multiple layers.
Now you need an infinite 2D-grid of 3D-boxes filled with a finite number of stars. For each box, you can define an individual RANDOM_SEED value, using its grid-coordinates. The stars in each box can be generated on-the-fly.
Remember to correct the perspective when you zoom: Each 3D-box has a near-rectangle (front-face) and a far-rectangle. You will see more stars of neighbouring boxes, whenever the far-rectangle or near-rectangle shrinks in your view.
Your far-rectangles should never be smaller than half the width of the near-rectangles, otherwise it might be troublesome: You might have to scan huge lists of stars where most of them are out of bounds. You can realize stars behind the far-rectangles via additional 2D-grids of 3D-boxes with other sizes and depths.
Why not combine the coordinates of the starfield 3D boxes to form the random number seed? Use a global "adjustment" if you want to produce different universes. That way you don't need to track the boxes you can't see because the contents are fixed by their location.
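A minimal sketch of that seeding idea in Python (the hash mix, star count and star attributes are arbitrary placeholders):

import random

def stars_for_box(i, j, adjustment=0, count=100):
    # Seed derived from the box's grid coordinates plus a global 'adjustment',
    # so each box's contents are fixed by its location and never stored.
    seed = (i * 73856093) ^ (j * 19349663) ^ adjustment
    rng = random.Random(seed)
    # x, y inside the box, plus a depth factor for parallax.
    return [(i + rng.random(), j + rng.random(), rng.uniform(0.2, 1.0))
            for _ in range(count)]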
I want to write a paint program in the style of MS Paint.
For painting things on screen when the user moves the mouse, I have to wait for mouse move events and draw on the screen whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
        positionOld = positionNew
Now my question: interpolating with straight line segments looks too jagged for my taste; can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop implement?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive? The GUI framework I'm using is wxWidgets.
(Programming language: Haskell, but that's irrelevant here)
EDIT: Clarification: I want something that looks smoother than straight line segments, see the picture (original size):
EDIT2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im     <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1)   -- wxBitmap
dc     <- memoryDCCreate                  -- wxMemoryDC
memoryDCSelectObject dc bitmap
...

-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
    ...
    line dc posOld posNew [ color    := white
                          , penJoin  := JoinRound
                          , penWidth := 2 ]
    repaint sw   -- a wxScrolledWindow

-- handle paint event
onPaint ... = do
    ...
    -- draw bitmap on the wxScrolledWindow
    drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is why I'm getting a rather low frequency of mouse move events.
Live demos
version 1 - more smooth, but more changing while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is
Spline interpolation of the points
The solution is to store coordinates of the points and then perform spline interpolation.
I took the solution demonstrated here and modified it. They computed the spline after you stop drawing; I modified the code so that it draws immediately. You might see, though, that the spline changes during the drawing. For a real application, you will probably need two canvases: one with the old drawings and the other with just the current drawing, which will change constantly until your mouse stops.
Version 1 uses spline simplification (it deletes points that are close to the line), which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable result (and is computationally less expensive).
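If you prefer something outside the demos, here is a small Python sketch of the same idea using a Catmull-Rom spline, which passes through every captured point and only needs the immediate neighbours of each segment (my own illustration, not the code behind the demos above):

def catmull_rom(points, samples_per_segment=16):
    # points: list of (x, y) mouse samples; the first and last point only act
    # as tangent handles, so the drawn curve runs from points[1] to points[-2].
    if len(points) < 4:
        return list(points)
    out = []
    for i in range(1, len(points) - 2):
        p0, p1, p2, p3 = points[i - 1], points[i], points[i + 1], points[i + 2]
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            out.append(tuple(
                0.5 * (2 * p1[k]
                       + (p2[k] - p0[k]) * t
                       + (2 * p0[k] - 5 * p1[k] + 4 * p2[k] - p3[k]) * t ** 2
                       + (3 * p1[k] - p0[k] - 3 * p2[k] + p3[k]) * t ** 3)
                for k in (0, 1)))
    out.append(tuple(points[-2]))
    return out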
You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points available for the calculation.
As I see it, the problem of the jagged edges of a freehand curve drawn with fast mouse movements is not solved. In my opinion you either need to work around the polling frequency of the mousemove event in the system, e.g. with a different mouse driver or similar, or take the mathematical route: use some kind of algorithm to accurately bend the straight line between two points when the mouse event is polled. For a clear view, compare how a freehand line is drawn in Photoshop and how it is drawn in MS Paint. Thanks, folks!
I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
// screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if(event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this:
which seems significantly smoother than your example.
Interpolating mouse movements with line segments is fine, GIMP does it that way, too, as the following screenshot from a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. WxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich. Namely, drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30 fps. Copying a double buffer is no problem for modern machines, but WxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.
I agree with harviz - the problem isn't solved. It should be solved at the operating system level, by recording mouse movements in a priority thread, but no operating system I know of does that. However, the app developer can also work around this operating system limitation by interpolating better than linearly.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of putting a line between two points A and D, look at the point P preceding A and use a cubic spline with the following control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we come into the second point at the same angle the line interpolation would, but leave the starting point at the same angle the previous segment came in at, making the joint smooth. We use the length of the current segment (a quarter of it) for both control-point distances so the size of the bend fits its proportion.
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness here, which might be alleviated by using both the previous and the next point to adjust both angles, but that would also mean drawing one point less than what one has got. I find this result already satisfactory, so I didn't try.
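For reference, the construction above in a few lines of Python (a hypothetical helper; P is the point preceding the segment, A and D its endpoints, and the returned B and C are the cubic Bezier control points):

import math

def control_points(P, A, D):
    # v = A - P, w = D - A, w' = w / 4, v' = v * |w| / |v| / 4
    vx, vy = A[0] - P[0], A[1] - P[1]
    wx, wy = D[0] - A[0], D[1] - A[1]
    vlen = math.hypot(vx, vy) or 1.0      # guard against |v| == 0
    scale = math.hypot(wx, wy) / vlen / 4.0
    B = (A[0] + vx * scale, A[1] + vy * scale)
    C = (D[0] - wx / 4.0, D[1] - wy / 4.0)
    return B, C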