I am currently working on a user interface in which it is possible to add objects by clicking on the screen. I don't want any overlapping objects. While it is easy to detect whether a collision between two objects has occurred, I am still struggling with resolving these conflicts.
Currently, I am resolving the conflicts locally by moving the intruding object away from the collision. This, however, may lead to new collisions, which are resolved in the same manner. Unfortunately, there is no guarantee that this process will ever stop.
Are there any standard problems this relates to, or algorithms to use? Or any efficient solutions that are not prone to endless recursion?
Now that I have thought about it more, I think you simply move all the objects that lie in the direction of the move you are already making. Doing it this way essentially rules out the possibility of endless recursion.
Since you know the size of the objects to move and the direction of a sensible movement, this should solve the problem.
"grid snap" is avoiding the problem but not a solution to the problem.
As Sarah pointed out, you can solve this by moving your objects a random distance in one direction, then checking again for a collision and moving the newly colliding object. However, this can lead to exponential growth of the problem.
Instead, you can try to implement a light physics engine where your objects "bounce" against each other and, thanks to friction, come to a halt after some time. Try googling for continuous collision avoidance.
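A minimal sketch of that relaxation idea in Python (not a full physics engine; the circle shapes, the damping constant, and the iteration cap are my own assumptions): repeatedly push overlapping circles apart by a damped amount and stop after a fixed number of passes, so the process cannot run forever.

import math

def separate(objects, radius, iterations=50, damping=0.5):
    # objects: list of [x, y] positions, all assumed to share the same radius
    for _ in range(iterations):
        moved = False
        for i in range(len(objects)):
            for j in range(i + 1, len(objects)):
                ax, ay = objects[i]
                bx, by = objects[j]
                dx, dy = bx - ax, by - ay
                dist = math.hypot(dx, dy)
                overlap = 2 * radius - dist
                if overlap > 0:
                    if dist == 0:
                        dx, dy, dist = 1.0, 0.0, 1.0
                    # push both objects apart along the collision normal,
                    # scaled down by a damping factor so the system settles
                    push = damping * overlap / 2
                    nx, ny = dx / dist, dy / dist
                    objects[i][0] -= nx * push
                    objects[i][1] -= ny * push
                    objects[j][0] += nx * push
                    objects[j][1] += ny * push
                    moved = True
        if not moved:
            break   # no overlaps left, stop early
    return objects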
The easiest way I can think of to resolve your problem is by implementing a "grid snap" behavior.
Basically, there are only predefined areas in your frame where the user can add UI elements---the grid cells. When a user drops a UI element on your frame, you detect which grid cell it mainly falls in (you can choose your own behavior in case it falls equally across two or more grid cells). This way, you need not detect collisions between UI elements at all.
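A minimal sketch of the idea in Python (the cell size and the occupancy check are my own assumptions, not part of the answer above): snap the drop point to a grid cell and refuse the drop if the cell is already taken.

def snap_to_grid(x, y, cell_size, occupied):
    # occupied: set of (col, row) cells that already hold a UI element
    col, row = int(x // cell_size), int(y // cell_size)
    if (col, row) in occupied:
        return None                      # cell already taken, reject the drop
    occupied.add((col, row))
    return col * cell_size, row * cell_size   # top-left corner the element snaps to

For example, with a 64-pixel cell size a drop at (137, 212) snaps to (128, 192), and no overlap test between elements is ever needed.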
Edit
Well, I certainly did not anticipate the cases you outlined in your comment. If the sizes of your objects vary by that much, I admit that "grid snap" may not be applicable to your situation---you might end up with a lot of empty space. I was thinking of something along the lines of Visual Basic when I composed my answer (VB, as far as I remember, does implement some sort of grid snap behavior).
A minor point, though: while you may have your users' best interests at heart by letting them have exact control over the positioning of UI elements, think of how your users will interact with your program. Positioning things exactly on the screen using a mouse can be punishing, and it may just backfire on you. The same, however, cannot be said if you are guaranteed that your users will always use a touchscreen device.
Related
Is there a way to animate wind for certain game objects?
For example, branches of trees should gently move, as if there were a breeze in the game. It is not gameplay, more of a special background effect.
If it's not possible in code, what would be the best way to create proper sprite images?
I see a few options for actually achieving that wind effect.
SKFieldNode. It allows you to apply physics effects to nodes. If you want real tree branches that move based on physics, you should combine SKFieldNode with SKPhysicsJoint. Combining those two lets you create independent branch nodes that receive some kind of force to simulate a wind effect. To understand what you can do with SKPhysicsJoint, check out this guide. This solution can get really complex and hard to achieve with only a superficial understanding of the SpriteKit engine, but you can create an amazing effect with it. Personally, I would not recommend it if you have deadlines to meet: physics always gets buggy if you lose your grasp on what you are doing, and you may invest a lot of time trying to achieve this with physics.
Create different animations of your tree responding to wind movement, and control which animation frame to use in each specific case. I would highly recommend this one, because you will have control over what is going on with your tree, and people who play games don't pay that much attention to what is going on in the background, although it is a good thing to think about from a game-experience perspective.
Create your tree textures and change the anchor point to be at the place where you want the effect to be most pronounced and responsive. The farther a coordinate in the node is from the anchor point's coordinate, the less effect it will receive from your SKAction.
Sorry for my English; it is not my native language.
I am making a version of the Atari game "Centipede" for my computer science class. I need help creating the collision detection for my code. I need to make it so that when a bullet hits a part of the centipede, the game detects it and the part that got hit goes away.
This is a pretty broad question, but I'll try to help in a general sense.
You need to read up on collision detection.
First you probably want to break your centipede down into individual rectangles. For each rectangle, check whether it's colliding with the bullet.
You might consider point-rectangle collision detection, where you check whether the point is inside the rectangle. This will work if your bullet is a small point.
Or you might consider rectangle-rectangle collision detection, where you check whether two rectangles are overlapping. Use this if your bullets are larger than a point. Even if your bullet is a circle, you can usually get away with this kind of collision detection.
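Both checks fit in a few lines; here is a language-agnostic sketch in Python (parameter names are illustrative, not from any particular library):

def point_in_rect(px, py, rx, ry, rw, rh):
    # true if the point lies inside the axis-aligned rectangle
    return rx <= px <= rx + rw and ry <= py <= ry + rh

def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    # true if two axis-aligned rectangles overlap
    return ax < bx + bw and ax + aw > bx and ay < by + bh and ay + ah > by

Loop over the centipede's segment rectangles and test each one against the bullet; the first segment that reports a hit is the one to remove.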
Please try something, and if you get stuck, then please post a MCVE that demonstrates where you're stuck. Good luck.
A simple implementation would be as follows:
1. Designate an array that represents the pixel positions of the centipede
2. Designate another array that represents the pixel positions of the bullet
3. Update each array's values based on the sampling rate of your game and check to see if there is any overlap. (Ideally have this as a separate thread)
4. Any overlap is indicative of a collision, so have some sort of collision handler function that is triggered whenever an overlap occurs and deletes the part of the centipede that was hit.
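A compact way to express steps 1-4 in Python (the set-based representation is my own assumption; the steps above do not prescribe a data structure):

def find_hit_segment(centipede_segments, bullet_pixels):
    # centipede_segments: list of sets of (x, y) pixels, one set per body part
    # bullet_pixels: set of (x, y) pixels currently covered by the bullet
    for index, segment in enumerate(centipede_segments):
        if segment & bullet_pixels:      # non-empty intersection means overlap
            return index                 # which body part was hit
    return None

# collision handler: remove the part that was hit
# hit = find_hit_segment(segments, bullet)
# if hit is not None:
#     del segments[hit]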
I'm considering implementing my own (toy) MVC framework, mainly just for practice and to have fun with it. I have worked with such frameworks in the past, but when thinking about how I would go about it, a couple of questions arose.
What puzzles me the most is how I should tackle the drawing of the visual elements. My idea was to implement each item's drawing logic in the item's class, organize them into a tree structure, like in WPF, and pass down some sort of canvas that the elements can draw on while traversing the tree.
I'm having doubts, though, about whether I should pass a canvas down an entire visual tree. Another interesting thing is the handling of overlapping elements and which to draw first. I thought the visual tree would take care of that by drawing elements in the order they appear in a depth-first search. But then I thought that the newest element should be on top, no matter how close it is to the root of the tree.
So basically I couldn't really find anything on implementation best practices or details when it comes to drawing the elements, and I could use some friendly advice on this; if you could point me to some material that covers it, that would be more than welcome.
The MVC pattern typically doesn't tackle such granular details. It ultimately comes down to decomposing the problem into three broad domains: data and logic, user input, and user output.
I'm having doubts, though, about whether I should pass a canvas down an entire visual tree.
How come? From a performance or coupling/responsibility perspective?
If it's performance, this is a very solid start. You do have to descend down the tree and redraw everything by default, but that can be mitigated by turning your hierarchy into an accelerator and keeping track of which portions of the screen/canvas/image need to be redrawn ("dirty regions"). Only descend down the branches that overlap this dirty region.
For the dirty regions, you can break up your canvas into a grid. As widgets need updating, mark the region(s) they occupy as needing to be redrawn. Only redraw widgets occupying those grid cells which are marked as needing to be redrawn. If you want to get really elaborate and mitigate overdraw to a minimum, you can use a quad-tree (but typically overkill for all but the most dynamic kind of systems with elaborate animating content and things like that).
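A sketch of that bookkeeping in Python (the cell size and the class shape are assumptions on my part, not a prescribed design):

class DirtyGrid:
    # coarse grid over the canvas; cells marked dirty must be repainted
    def __init__(self, cell=64):
        self.cell = cell
        self.dirty = set()

    def _cells(self, x, y, w, h):
        # every grid cell touched by the rectangle (x, y, w, h)
        for cx in range(int(x) // self.cell, int(x + w) // self.cell + 1):
            for cy in range(int(y) // self.cell, int(y + h) // self.cell + 1):
                yield (cx, cy)

    def mark(self, x, y, w, h):
        # a widget changed: mark every cell its rectangle touches
        self.dirty.update(self._cells(x, y, w, h))

    def needs_redraw(self, x, y, w, h):
        # only descend/redraw branches that touch at least one dirty cell
        return any(c in self.dirty for c in self._cells(x, y, w, h))

    def clear(self):
        self.dirty.clear()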
It might be tempting to make this problem easier to solve by double-buffering everything and having children draw into their parents' canvases, but that's a route that gains some immediate performance in exchange for a large performance barrier at the design level, in the form of memory consumption and cache efficiency. I don't recommend this approach: double-buffer the window contents to avoid flickery artifacts, but not every single control inside of it.
If it's about coupling and responsibilities, often it's overkill from a UI context to try to decouple the rendering of a widget from the widget itself. Decoupling rendering from entities is common in game architectures through entity-component systems which would provide rendering components (typically in the form of dumb data) and defer the rendering functionality to systems, but those take a great deal of work upfront to implement for tremendous flexibility which you might never need in this kind of context.
Another interesting thing is the handling of overlapping elements and which to draw first. I thought the visual tree would take care of that by drawing elements in the order they appear in a depth-first search. But then I thought that the newest element should be on top, no matter how close it is to the root of the tree.
The tree doesn't have to be a rigid thing. You can send siblings to the front or the back of a child list to affect drawing order. Typically z-order changes don't occur very frequently, and most of the time you'd be better off this way than paying a large overhead to sort the drawing on the fly while you are rendering.
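For example, a child list whose order is the draw order makes that operation trivial (a Python sketch, not tied to any particular framework; names are illustrative):

class Node:
    def __init__(self):
        self.children = []

    def add(self, child):
        self.children.append(child)       # later children draw later, i.e. on top

    def bring_to_front(self, child):
        # a z-order change is just moving the sibling to the end of the list
        self.children.remove(child)
        self.children.append(child)

    def draw(self, canvas):
        self.draw_self(canvas)            # parent first, then children over it
        for child in self.children:       # depth-first traversal
            child.draw(canvas)

    def draw_self(self, canvas):
        pass                              # overridden by concrete widgets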
Mostly I just recommend keeping it simple, especially if this is your first attempt at constructing a general-purpose MVC framework. You're far more likely to err on the side of making things too complicated and painting yourself into a corner. Simple designs are pliable designs.
I have an application where I need to move a number of objects around on the screen in a random fashion, and they cannot bump into each other. I'm looking for an algorithm that will allow me to generate paths that don't create collisions and can continue for an indefinite time (i.e., the objects keep moving around until a user-driven event removes them from the program).
I'm not a game programmer but I think this looks like an AI problem and you guys probably solve it with your eyes closed. From what I've read A* seems to be the recommended 'basic idea' but I don't really want to invest a lot of time into it without some confirmation.
Can anyone shed some light on an approach? Anti-gravity movement maybe?
This is to be implemented on iOS, if that is important
New paths need to be generated at the end of each path
There is no visible 'grid'. Movement is completely free in 2D space
The objects are insects that walk around the screen until they are killed
A* is an algorithm for finding the shortest path between a start and a goal configuration (in terms of whatever you define as short: common choices are e.g. Euclidean distance, cost or time, angular distance...). Your insects don't seem to have a specific goal; they don't even need a shortest path. I would certainly not go for A*. (By the way, since you have a dynamic environment, D* would have been an idea - but it is still meant to find a path from A to B.)
I would tackle the problem as follows:
Random Paths and following them
For the random paths I see two methods. The first would be a simple random walk (click here to see a nice 2D animation with explanations), which can suffer from jittering and doesn't look too nice. The second one needs a slightly more detailed explanation.
For each insect, generate four random points around it, maybe starting on a sinusoid. With spline interpolation, generate a smooth curve between those points. Take care to have C1 (in 2D) or C2 (in 3D) continuity. (Suggestion: Hermite splines)
With Catmull-Rom splines you can find your configurations while moving along the curve.
An application of a similar approach can be found in this blog post about procedural racetracks, also a more technical (but still not too technical) explanation can be found in these old slides (pdf) from a computer animations course.
When an insect starts moving, it constantly moves between the second and third points: whenever it reaches the third point, you remove the first point and append a new one, so the point just reached becomes the new second point.
If third point is reached
Remove first
Append new point
Recalculate spline
End if
For a smoother curve, add more points in total and move somewhere in the middle of the window; the principle stays the same. (Personally I have only used this in fixed environments, but it should work in dynamic ones as well.)
This can, if your random point generation is good (maybe you can use an approach similar to the one provided in the above linked blog post, or have a look at algorithms on the PCG Wiki), lead to smooth paths all over the screen.
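A sketch of this sliding-window idea in Python with uniform Catmull-Rom splines (the screen size, the speed, and the purely uniform random point generation are my assumptions; the answer above suggests smarter point generation):

import random

def catmull_rom(p0, p1, p2, p3, t):
    # uniform Catmull-Rom spline: interpolates between p1 and p2 for t in [0, 1]
    def axis(a, b, c, d):
        return 0.5 * (2 * b + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t ** 2
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return axis(p0[0], p1[0], p2[0], p3[0]), axis(p0[1], p1[1], p2[1], p3[1])

def new_point():
    return random.uniform(0, 320), random.uniform(0, 480)

points = [new_point() for _ in range(4)]   # four control points per insect
t = 0.0

def step(dt, speed=0.5):
    # advance along the curve; slide the window when the third point is reached
    global t, points
    t += speed * dt
    if t >= 1.0:
        points.pop(0)                      # remove first
        points.append(new_point())         # append new point
        t -= 1.0                           # recalculate spline from the new window
    return catmull_rom(points[0], points[1], points[2], points[3], t)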
Avoid other insects
To avoid other insects, three different methods come to my mind.
Bug algorithms
Braitenberg vehicles
An application of potential fields
For the potential fields I recommend reading this paper about dynamic motion planning (pdf). It's from robotics, but fairly easy to apply to your problem as well. You can just use the robot's (here, the insect's) next spline point as the goal and set its velocity to 0 to apply this approach. However, it might be a bit too much for your simple game.
A discussion of Braitenberg vehicles can be found here (pdf). The original idea was more of a technical method (drive towards or away from a light source depending on how your motor is coupled with the photo receptor) and is often used to show how we apply emotional concepts like fear and attraction to other objects. The "fear" behaviour is an approach used for obstacle avoidance in robotics as well.
The third and probably simplest method is bug algorithms (pdf). I always have problems with the boundary following, which is a bit tricky. But to avoid another insect, these algorithms - no matter which one you use (I suggest Bug 1 or Tangent Bug) - should do the trick. They are very simple: move towards your goal (in this application, along the Catmull-Rom splines) until there is an obstacle in front of you. If the obstacle is close, change the insect's state to "obstacle avoidance" and run your bug algorithm. If you give both "colliding" insects the same turn direction, they will automatically go around each other and follow their original paths.
As a variation you could just let them turn and recalculate a new spline from that point on.
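As a grossly simplified stand-in for a real bug algorithm (the distance threshold and the turn angle are arbitrary assumptions), the shared-turn-direction rule might look like this in Python:

import math

def avoid(heading, pos, others, safe_dist=20.0, turn=math.radians(30)):
    # heading: current travel direction in radians (toward the next spline point)
    # others: positions of the other insects
    for ox, oy in others:
        dx, dy = ox - pos[0], oy - pos[1]
        if math.hypot(dx, dy) < safe_dist:
            to_other = math.atan2(dy, dx)
            diff = (to_other - heading + math.pi) % (2 * math.pi) - math.pi
            if abs(diff) < math.pi / 2:    # the other insect is roughly in front
                return heading + turn      # every insect turns the same way
    return heading                         # no obstacle: keep following the spline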
Conclusion
Path finding and random path generation are different things. You have to experiment with what looks best for your insects. A* is definitely meant for finding shortest paths, not for creating random paths and following them.
You cannot plan the trajectories ahead of time for an indefinite duration!
I suggest a simpler approach where you just predict the next collision (knowing the positions and speeds of the objects allows you to tell whether and when they will collide), and resolve it by changing the speed or direction of either object (bounce before the objects touch).
Make sure to redo the collision check in case you have created an even earlier collision!
The real challenge in your case is to efficiently predict collisions among numerous objects, a priori an O(N²) task. You will accelerate that by superimposing a coarse grid on the play field and looking at objects in neighboring cells only.
It may also be possible to maintain a list of object pairs that "might interfere in some future" (i.e. considering their distance and relative speed) and keep it updated. Checking that a pair may leave the list is relatively easy; efficiently checking for new pairs needing to enter the list is not.
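The pairwise prediction itself reduces to solving a quadratic in time; a Python sketch for two circles moving at constant velocity (the grid acceleration and the pair list are not shown):

def time_of_collision(p1, v1, r1, p2, v2, r2):
    # earliest t >= 0 at which the two circles touch, or None if they never do
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    r = r1 + r2
    a = vx * vx + vy * vy
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - r * r
    if c <= 0:
        return 0.0                      # already touching or overlapping
    if a == 0:
        return None                     # no relative motion, never collide
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # closest approach stays farther than r
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t >= 0 else None        # negative root: they are moving apart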
Look at this and this, which describe an AI program that plays the Mario game automatically.
In those links, what the author did was use an A* algorithm to guide Mario to the right border of the screen as fast as possible while avoiding getting hurt.
So the idea is that, for each time frame, he has an Environment describing the current positions of the other objects in the scene, and for each action (up, down, left, right, and do nothing) he calculates a cost function and decides the next movement based on that.
Source: http://www.quora.com/What-are-the-coolest-algorithms
For A* you would need a 2D grid, even if it is not visible. If I understand your idea correctly, you could do the following.
Implement pathfinding (e.g. A*), then just generate random destination points on the screen and calculate the path. Once your insect reaches the destination, generate another destination point/grid cell and proceed until the insect dies.
As I see it, A* would only make sense if you have obstacles on the screen that the insect should navigate around; otherwise it would be enough to just calculate a straight vector path and perhaps handle collisions with other insects/objects.
Note: I implemented A* once; later I found out that Lee's algorithm does pretty much the same thing but was easier to implement.
Consider a Hamiltonian cycle - the idea is a route that visits all the positions on a grid once (and only once). If you construct the cycle in advance (i.e. precalculate it), and set your insects off with some offset between them, they will never collide, simply because the path never intersects itself.
Also, for bonus points, Hamiltonian paths tend to 'wiggle about', and because it's a loop you can predict (and precalculate) the path into the indefinite future.
You can always use the nodes of the grid as knot points for a spline to smooth the movement, or even randomly shift all the points away from their strict 2d grid positions, until you have the desired motion.
Example Hamiltonian cycle from Wikimedia:
On a side note, if you want to generate such a path, consider constructing a loop through many points and just moving the points around in such a manner that they never intersect an existing edge. With some encouragement to move into gaps and away from each other, they should settle into some long, never-intersecting path. Store the result and use for your loop.
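One simple way to precalculate such a cycle in code is a serpentine path over the grid; this Python sketch assumes an even number of rows and reserves column 0 as the return lane (the randomised smoothing described above would be applied afterwards):

def hamiltonian_cycle(cols, rows):
    # visits every cell of a cols x rows grid exactly once and returns to the start;
    # requires rows to be even and cols >= 2
    path = [(c, 0) for c in range(cols)]                   # along the top row
    for r in range(1, rows):
        cs = range(cols - 1, 0, -1) if r % 2 == 1 else range(1, cols)
        path.extend((c, r) for c in cs)                    # snake through columns 1..cols-1
    path.append((0, rows - 1))                             # step into the return lane
    path.extend((0, r) for r in range(rows - 2, 0, -1))    # back up column 0 to the start
    return path                                            # path[-1] is adjacent to path[0]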
I am building a web application using .NET 3.5 (ASP.NET, SQL Server, C#, WCF, WF, etc) and I have run into a major design dilemma. This is a uni project btw, but it is 100% up to me what I develop.
I need to design a system whereby I can take an image and automatically crop out a certain object within it, without user input. So, for example, cut out the car in a picture of a road. I've given this a lot of thought, and I can't see any feasible method. I guess this thread is to discuss the issues and the feasibility of achieving this goal. Eventually, I would get the dimensions of a car (or whatever it may be) and pass these into a (custom) 3D modelling app as parameters to render a 3D model. This last step is a lot more feasible; it's the cropping that is the issue. I have thought of all sorts of ideas, like getting the colour of the car and then the outline around that colour. So if the car (for example) is yellow, trace around any yellow pixels in the image. But this would fail if there were two yellow cars in the photo.
Ideally, I would like the system to be completely automated. But I guess I can't have everything my way. Also, my skills are in what I mentioned above (.NET 3.5, SQL Server, AJAX, web design) as opposed to C++ but I would be open to any solution just to see the feasibility.
I also found this patent: US Patent 7034848 - System and method for automatically cropping graphical images
Thanks
This is one of the problems that needed to be solved to finish the DARPA Grand Challenge. Google video has a great presentation by the project lead from the winning team, where he talks about how they went about their solution, and how some of the other teams approached it. The relevant portion starts around 19:30 of the video, but it's a great talk, and the whole thing is worth a watch. Hopefully it gives you a good starting point for solving your problem.
What you are talking about is an open research problem, or even several research problems. One way to tackle this is by image segmentation. If you can safely assume that there is one object of interest in the image, you can try a figure-ground segmentation algorithm. There are many such algorithms, and none of them is perfect. They usually output a segmentation mask: a binary image where the figure is white and the background is black. You would then find the bounding box of the figure and use it to crop. The thing to remember is that none of the existing segmentation algorithms will give you what you want 100% of the time.
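Once you have such a mask, the bounding-box crop is the easy part; a sketch with NumPy (it assumes the segmentation step has already produced the binary mask):

import numpy as np

def crop_to_figure(image, mask):
    # image: H x W x 3 array, mask: H x W boolean array (True = figure pixels)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                        # the segmentation found no figure at all
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]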
Alternatively, if you know ahead of time what specific type of object you need to crop (car, person, motorcycle), then you can try an object detection algorithm. Once again, there are many, and none of them are perfect either. On the other hand, some of them may work better than segmentation if your object of interest is on very cluttered background.
To summarize, if you wish to pursue this, you will have to read a fair number of computer vision papers and try a fair number of different algorithms. You will also increase your chances of success if you constrain your problem domain as much as possible: for example, restrict yourself to a small number of object categories, assume there is only one object of interest in an image, or restrict yourself to a certain type of scene (nature, sea, etc.). Also keep in mind that even the accuracy of state-of-the-art approaches to this type of problem leaves a lot of room for improvement.
And by the way, the choice of language or platform for this project is by far the least difficult part.
A method often used for face detection in images is through the use of a Haar classifier cascade. A classifier cascade can be trained to detect any objects, not just faces, but the ability of the classifier is highly dependent on the quality of the training data.
This paper by Viola and Jones explains how it works and how it can be optimised.
Although it is C++, you might want to take a look at the image processing libraries provided by the OpenCV project, which include code to both train and use Haar cascades. You will need a set of car and non-car images to train the system!
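For illustration, detection with a trained cascade looks roughly like this via OpenCV's Python bindings ('cars.xml' is a hypothetical cascade you would have to train yourself from car and non-car images, as described above; the parameter values are guesses):

import cv2

cascade = cv2.CascadeClassifier('cars.xml')     # hypothetical cascade trained on car images
image = cv2.imread('road.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in detections:
    crop = image[y:y + h, x:x + w]              # crop each detected car out of the photo
    cv2.imwrite('car_%d_%d.png' % (x, y), crop)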
Some of the best attempts I've seen at this use a large database of images to help understand the image you have. These days you have Flickr, which is not only a giant corpus of images, but is also tagged with meta-information about what each image shows.
Some projects that do this are documented here:
http://blogs.zdnet.com/emergingtech/?p=629
Start with analyzing the images yourself. That way you can formulate the criteria on which to match the car. And you get to define what you cannot match.
If all cars have the same background, for example, it need not be that complex. But your example states a car on a street. There may be parked cars. Should they be recognized?
If you have access to MatLab, you could test your pattern recognition filters with specialized software like PRTools.
When I was studying (a long time ago :) I used Khoros Cantata and found that an edge filter can simplify the image greatly.
But again, first define the conditions on the input. If you don't do that, you will not succeed, because pattern recognition is really hard (think about how long it took to crack captchas).
I did say photo, so this could be a black car with a black background. I did think of specifying the colour of the object and then, when that colour is found, tracing around it (a high-level explanation). But with a black object on a black background (no contrast, in other words), it would be a very difficult task.
Better still, I've come across several sites with 3D models of cars. I could always use one of those, stick it into the 3D modelling app, and render it.
A 3D model would be easier to work with, a real world photo much harder. It does suck :(
If I'm reading this right... This is where AI shines.
I think the "simplest" solution would be to use a neural-network based image recognition algorithm. Unless you know that the car will look the exact same in each picture, then that's pretty much the only way.
If it IS the exact same, then you can just search for the pixel pattern, and get the bounding rectangle, and just set the image border to the inner boundary of the rectangle.
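If the object really is pixel-identical, template matching is one way to find that bounding rectangle; a hedged sketch using OpenCV's Python bindings (the file names and the score threshold are placeholders):

import cv2

scene = cv2.imread('scene.png')
template = cv2.imread('car_template.png')
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)    # best match location and its score
if score > 0.9:                                  # threshold is a guess, tune as needed
    h, w = template.shape[:2]
    x, y = top_left
    crop = scene[y:y + h, x:x + w]               # bounding rectangle of the match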
I think that you will never get good results without a real user telling the program what to do. Think of it this way: how should your program decide when there is more than one interesting object present (for example, two cars)? What if the object you want is actually the mountain in the background? What if nothing of interest is in the picture, so there is nothing to select as the object to crop out? Etc., etc...
With that said, if you can make assumptions like: only 1 object will be present, then you can have a go with using image recognition algorithms.
Now that I think of it: I recently attended a lecture about artificial intelligence in robots and robotic research techniques. Their research was about language interaction, evolution, and language recognition, but in order to do that they also needed some simple image recognition algorithms to process the perceived environment. One of the tricks they used was to make a 3D plot of the image, where x and y were the normal x and y axes and the z axis was the brightness of that particular point; then they used the same technique for red-green values and for blue-yellow values. And lo and behold, they had something (relatively) easy they could use to pick the objects out of the perceived environment.
(I'm terribly sorry, but I can't find a link to the nice charts they had that showed how it all worked).
Anyway, the point is that they were not interested (that much) in image recognition, so they created something that worked well enough, using something less advanced and thus less time-consuming; so it is possible to create something simple for this complex task.
Also, any good image editing program has some kind of magic wand that will select, with the right amount of tweaking, the object of interest you point it at; maybe it's worth your time to look into that as well.
So, it basically will mean that you:
have to make some assumptions, otherwise it will fail terribly
will probably best be served with techniques from AI, and more specifically image recognition
can take a look at paint.NET and its magic wand algorithm
try to use the fact that a good photo will have the object of interest somewhere in the middle of the image
... but I'm not saying that this is the solution to your problem; maybe something simpler can be used.
Oh, and I will continue to look for those links; they hold some really valuable information about this topic, but I can't promise anything.