Which navigation methods would be the most performant and flexible for a game with a very large number of AI on a dynamic playfield? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm not 100% sure what factors are important when deciding whether to use Unity's NavMesh vs an advanced pathing algorithm such as HPA* or similar. When considering the mechanics below, what are the implications of using Unity's NavMesh vs rolling my own algorithms:
Grid based real-time building.
Large number of AI, friendly, hostile, neutral. Into the hundreds. Not all visible on screen at once but the playfield would be very large.
AI adheres to a hierarchy: entities issue commands, receive commands, and execute them in tandem with one another. This could allow advanced pathing to be computed for a single unit, which relays rough directions to others so they can run lower-level pathing to save on performance.
World has a strong chance of being procedural. I wanted to go infinite proc-gen but I think that's out of scope. I don't intend for the ground plane to vary much in actual height; only the objects placed on it will.
Additions and removals within the environment will be dynamic at run-time by both the player and AI entities.
I've read some posts talking about how NavMesh can't handle runtime changes very well, but I have seen tutorials and store assets suggesting the contrary. Maybe I could combine methods too? The pathing is going to be a heavy investment of time, so any advice here would be greatly appreciated.

There are lots of solutions. It's way too much for a single answer, but here's some keywords to look into:
Swarm pathfinding
Potential fields
Flocking algorithms
Boids
Collision avoidance
Which one you use depends on how many units will be pathing at a time, whether they're pathing to the same place or different places, and how you want them to behave if multiple are going to the same place (e.g. should they intentionally avoid collisions with each other? Take alternate routes when one route is gridlocked? Or all just stupidly cram into the same hallway?)
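To make one of those keywords concrete: a flow field (a simple grid-based form of potential field) is popular for large unit counts because one breadth-first pass from the goal serves every unit headed there. A minimal sketch, assuming a 2D grid of 0 = walkable / 1 = blocked (the names and layout are illustrative, not any engine's API):

```python
from collections import deque

def flow_field(grid, goal):
    """One BFS outward from the goal labels every reachable cell with its
    step distance; any number of units can then share this one field."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def step_downhill(dist, pos):
    """Advance one unit a single cell by steepest descent on the field.
    Assumes pos is on a reachable cell (dist is not None there)."""
    best = pos
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = pos[0] + dy, pos[1] + dx
        if 0 <= ny < len(dist) and 0 <= nx < len(dist[0]) \
                and dist[ny][nx] is not None \
                and dist[ny][nx] < dist[best[0]][best[1]]:
            best = (ny, nx)
    return best
```

When the playfield changes (a building placed or destroyed), only the affected field needs rebuilding, and the per-step cost is independent of how many units follow it; that is what makes this family attractive for hundreds of agents.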

How does SVM work? [closed]

Closed 6 years ago.
Is it possible to provide a high-level, but specific explanation of how SVM algorithms work?
By high-level I mean it does not need to dig into the specifics of all the different types of SVM, parameters, none of that. By specific I mean an answer that explains the algebra, versus solely a geometric interpretation.
I understand it will find a decision boundary that separates the data points from your training set into two pre-labeled categories. I also understand it will seek to do so by finding the widest possible gap between the categories and drawing the separation boundary through it. What I would like to know is how it makes that determination. I am not looking for code, rather an explanation of the calculations performed and the logic.
I know it has something to do with orthogonality, but the specific steps are very "fuzzy" everywhere I could find an explanation.
Here's a video that covers one seminal algorithm quite nicely. The big revelations for me are (1) optimize the square of the critical metric, giving us a value that's always positive, so that minimizing the square (still easily differentiable) gives us the optimum; (2) Using a simple, but not-quite-obvious "kernel trick" to make the vector classifications compute easily.
Watch carefully at how unwanted terms disappear, leaving N+1 vectors to define the gap space in N dimensions.
I'll give you a few small details that will help you continue understanding how SVM works.
To keep everything simple, assume 2 dimensions and linearly separable data. The general idea in SVM is to find a hyperplane that maximizes the margin between the two classes. Each of your data points is a vector from the origin. Once you propose a hyperplane, you project each data vector onto the vector defining the hyperplane (its normal) and check whether the length of the projected vector falls before or after the hyperplane; that is how the two classes are assigned.
This is a very simple way of seeing it; you can then go into more detail by following some papers or videos.
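Tying both answers to the algebra: for linearly separable data with labels $y_i \in \{-1, +1\}$, the hard-margin SVM picks the hyperplane $\mathbf{w} \cdot \mathbf{x} + b = 0$ by solving

```latex
\min_{\mathbf{w},\, b} \;\; \tfrac{1}{2}\,\lVert \mathbf{w} \rVert^{2}
\qquad \text{subject to} \qquad
y_i \left( \mathbf{w} \cdot \mathbf{x}_i + b \right) \ge 1 \quad \text{for all } i .
```

The gap between the classes has width $2 / \lVert \mathbf{w} \rVert$, so minimizing the square of $\lVert \mathbf{w} \rVert$ (the always-positive, easily differentiable quantity the first answer mentions) maximizes the margin, and a new point $\mathbf{x}$ is classified by $\operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b)$, which is exactly the projection-onto-the-normal test described above.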

What makes a task difficult or 'complex' to machine learn? Regarding complexity of pattern, not computationally [closed]

Closed 8 years ago.
As many, I am interested in machine learning. I have taken a class on this topic, and have been reading some papers. I am interested in finding out what makes a problem difficult to solve with machine learning. Ideally, I want to learn about how the complexity of a problem regarding machine learning can be quantified or expressed.
Obviously, if a pattern is very noisy, one can look at the update techniques of different algorithms and observe that some particular machine learning algorithm incorrectly updates itself in the wrong direction due to a noisy label, but this is a qualitative argument rather than analytical / quantifiable reasoning.
So, how can the complexity of a problem or pattern be quantified to reflect the difficulty a machine learning algorithm faces? Maybe something from information theory or so, I really do not have an idea.
In the theory of machine learning, the VC dimension of the domain is usually used to classify how hard it is to learn.
A domain is said to have VC dimension k if there is a set of k samples such that, regardless of their labels, the suggested model can "shatter" them (split them perfectly using some configuration of the model).
The Wikipedia page offers the 2D example as a domain, with a linear separator as the model: there is a setup of 3 points in 2D such that one can fit a linear separator to split them, whatever the labels are. However, for every 4 points in 2D, there is some assignment of labels such that a linear separator cannot split them.
Thus, the VC dimension of 2D space with a linear separator is 3.
Also, if the VC dimension of a domain and a model is infinite, the problem is said to be not learnable.
If you have a strong enough mathematical background and are interested in the theory of machine learning, you can try following the lectures of Amnon Shashua about PAC learning.
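The shattering definition above can be checked mechanically for tiny examples. A rough sketch, using a plain perceptron as the search over model configurations (all names are illustrative, and the epoch cap is assumed generous enough that every separable labeling converges):

```python
from itertools import product

def separable(points, labels, epochs=1000):
    """Perceptron in 2D: returns True if it finds a separator
    w.x + b > 0 matching the given +1/-1 labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += y * x1     # standard perceptron update
                w[1] += y * x2
                b += y
                errors += 1
        if errors == 0:            # a full pass with no mistakes
            return True
    return False                   # likely not linearly separable

def shatters(points):
    """A point set is shattered if every labeling is separable."""
    return all(separable(points, labels)
               for labels in product((-1, 1), repeat=len(points)))
```

Three non-collinear points pass `shatters`, while four points in the XOR arrangement fail it on the diagonal labeling, matching the VC dimension of 3 stated above.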

Is there some reliable way of detecting fake Facebook profiles [closed]

Closed 9 years ago.
I believe this could be interesting for many Facebook developers. Is there some reliable way of detecting fake profiles on Facebook? I am developing some games and applications for Facebook and have some virtual goods for sale. If a player wants to play more, he can create another profile, or many others, and play as much as he likes. The idea here is to somehow detect this and stop them from doing so.
Best Regards!
Put validation on the number of friends: if it is below a particular threshold, disallow the user; otherwise continue. Well, that's only an opinion, not a solution. :)
You can try using anomaly detection.
Make your 'features' the number of likes, spam reports, friends, or other relevant signals you've found helpful, and use the algorithm to detect the anomalies.
Another approach could be supervised learning, but it will require a labeled set of examples of "fake" and "real" users. The 'features' will be similar to those used for anomaly detection.
Train your learning algorithm using the labeled set (usually referred as training set), and use the resulting classifier to decide if a new user is fake or not.
Some algorithms you can use are SVM, C4.5, KNN, Naive Bayes.
You can evaluate results for both methods using cross-validation (this requires a training set, of course)
If you want to learn more about machine learning approaches, I recommend taking the webcourse at coursera.
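A minimal sketch of the anomaly-detection idea, assuming independent Gaussian features fitted on users presumed legitimate (the feature names and example numbers are made up for illustration):

```python
import math

def fit(rows):
    """Estimate a per-feature mean and variance from presumed-normal users."""
    n = len(rows)
    mu = [sum(col) / n for col in zip(*rows)]
    var = [max(sum((x - m) ** 2 for x in col) / n, 1e-9)
           for col, m in zip(zip(*rows), mu)]
    return mu, var

def log_density(x, mu, var):
    """Log of a product of independent Gaussian densities, one per feature.
    Low values mean the user looks unlike the normal population."""
    return sum(-(xi - m) ** 2 / (2 * v) - 0.5 * math.log(2 * math.pi * v)
               for xi, m, v in zip(x, mu, var))

# Hypothetical features per user: [friend_count, posts_per_day, likes_given]
mu, var = fit([[120, 3, 40], [200, 5, 60], [90, 2, 30], [150, 4, 55]])
print(log_density([2, 90, 0], mu, var))    # spam-like profile: very low score
print(log_density([130, 3, 45], mu, var))  # typical profile: much higher score
```

You would flag users whose score falls below a threshold chosen on a validation set; the supervised alternative replaces the density model with a trained classifier over the same features.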

What are possible methods for creating a physics engine [closed]

Closed 6 years ago.
This is a general question that comes to answer an issue that interests me for general knowledge and not to answer a specific problem.
I was wondering what the available ways are to implement a physics engine where objects interact with each other and with outside forces. As examples we can look at Angry Birds, or games like TIM, where there are objects that "fly" through the air, collide and interact with each other, and are affected by the potential of the environment, such as gravity, wind and other "forces".
The model I have thought of is that each simulated thing is represented by an object (an instance of some class) with a thread related to it. Each time slot, the thread "advances" the object in space by some small dt. In this case you could have an "environment" object that, given a position in space, returns the equivalent force applied by the environment's potential. What I can't figure out is how the objects interact with each other.
Also, am I close in my direction? Are there other solutions and models for those problems, and are they better? What are the things I'm missing (I must be missing some things)?
The implementation is typically nothing like what you describe, which would be far too expensive. Instead, everything is reduced to matrix transformations: points are lists of coordinates that are operated on by matrices that update them to the next time interval. The matrices are themselves calculated from the physics (they are a linear solution of the forces at that moment, more or less).
Things get more complicated when you have very large differences in scale (say, when simulating stars in a galaxy). Then you may use a more hierarchical approach, so that critical (e.g. fast-moving or, more accurately, strongly accelerating) points are updated more often than non-critical ones. But even then the in-memory representation is very abstract and nothing like as direct as "one object per thing".
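A minimal sketch of that batch-update idea, assuming a fixed time step and gravity as the only force: every body advances together in one pass over plain arrays, with no per-object threads.

```python
GRAVITY = (0.0, -9.81)   # the only "environment potential" in this sketch

def step(positions, velocities, dt):
    """Semi-implicit Euler: update every velocity from the forces, then
    every position from the new velocity, in one batch pass over all
    bodies per fixed time step."""
    for i in range(len(positions)):
        vx, vy = velocities[i]
        vx += GRAVITY[0] * dt
        vy += GRAVITY[1] * dt
        velocities[i] = (vx, vy)
        x, y = positions[i]
        positions[i] = (x + vx * dt, y + vy * dt)

pos = [(0.0, 100.0), (5.0, 50.0)]   # two projectiles
vel = [(10.0, 0.0), (0.0, 0.0)]
for _ in range(100):                # one simulated second at dt = 0.01
    step(pos, vel, 0.01)
```

Object-object interaction would be a separate pass over the same arrays (detect overlapping pairs, then apply impulses), which is why real engines keep the data in flat, cache-friendly buffers rather than one threaded object per thing.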

Algorithms for City Simulation? [closed]

Closed 3 years ago.
I want to create a city filled with virtual creatures.
Say, like SimCity, where each creature walks around, doing its own tasks.
I'd prefer the city to not 'explode' or do weird things -- like the population dies off, or the population leaves, or any other unexpected crap.
Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much like how for physics simulations, we have some basic rules that govern everything; is there a set of rules that governs how a simulation of a virtual city will be stable?)
I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated.
Thanks!
I would start with Conway's Game of Life.
Here is the original SimCity source code:
http://www.donhopkins.com/home/micropolis/micropolis-activity-source.tgz
It may be hard to find general resources on the subject, because it is quite a specific area.
I have implemented some population dynamics, and I know that it is not easy to get all the behavior correct to ensure that the population neither dies off nor overgrows. It is relatively easy if you implement a simple scenario like the predator-prey model, but it tends to get tricky as the number of factors increases.
Some advice:
Try to make the behavior of agents parameterized
Optimize the behavior parameters using some soft method (a neural network, a genetic algorithm, or a simple hill-climbing algorithm), optimizing a single metric of the simulation (such as the time before the whole population dies off, combined with the average growth factor)
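As a concrete starting point for the predator-prey scenario mentioned above, here is a minimal discrete Lotka-Volterra sketch (all coefficients are illustrative; tuning them is exactly the hard part described):

```python
def simulate(prey, pred, steps, dt=0.01,
             a=1.1, b=0.4, c=0.4, d=0.1):
    """Discrete Lotka-Volterra: a = prey growth rate, b = predation rate,
    c = predator death rate, d = predator gain per prey eaten.
    Plain Euler stepping of the two coupled populations."""
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dprey, pred + dpred
    return prey, pred

prey, pred = simulate(10.0, 10.0, 5000)   # both stay positive, oscillating
```

Both populations oscillate around the equilibrium (c/d, a/b); nudging any coefficient is an easy way to watch a stable simulation tip into extinction or explosion, which is precisely the tuning problem the advice above is about.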
Here is a pointer to some research on the topic, but be advised: the population in this research study all died off.
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104261
