I'm working on a game in GameMaker (from YoYo Games) where I have items moving along conveyor belts. As the items only move in one direction, I thought it would make the most sense to store them in a queue or queue-like data structure. However, to render the items, I need to be able to read all of them at any point in the queue, not just the head or tail.
[[a] [b] [c] [d]]
        |
        V
a <- [[ ] [b] [c] [d]] <- e
        |
        V
[[b] [c] [d] [e]]
  |   |   |   |
  V   V   V   V
  b   c   d   e
I could simply use an array and manually move every value forward by one slot each step (using a for loop), but somehow that seems inefficient, laggy, or at the very least bad form. My programming instincts recoil at the thought of such a system, anyway.
Is this a correct assumption to make? Is an array really the best way of implementing a structure like this? Should I even be worried about efficiency, or are the differences in this case negligible?
Some advice or examples (in any programming language) would be greatly appreciated.
It might "seem" inefficient to use an array, but it most likely won't be. Take into consideration how many items will actually be on the conveyor belt at any single time. If you want quick random access to any index in a data structure, you must use an array, a DS List or a DS Grid(wouldn't make sense here, tough).
Using a DS List, you can use ds_list_delete(your_list, 0) to 'dequeue' the head item, much like a DS Queue, and ds_list_add(your_list, value) to 'enqueue' an item at the tail.
Iterating through the list is then very simple:
for (var i = 0; i < ds_list_size(your_list); i++) {
    var item = your_list[| i];
    // draw or otherwise process the item here
}
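For comparison outside GameMaker: this dequeue-at-the-head, enqueue-at-the-tail, iterate-everything pattern is exactly what a double-ended queue gives you. A minimal sketch in Python, using collections.deque (a ring buffer underneath, so nothing is ever shifted):

from collections import deque

belt = deque(["a", "b", "c", "d"])
belt.append("e")          # enqueue at the tail, O(1)
dropped = belt.popleft()  # dequeue at the head ("a"), O(1)
for item in belt:         # read every item for rendering
    print(item)

Indexing into the middle of a deque is O(n), but for a belt holding a few dozen items that is negligible.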
It might also be relevant to add that, in a game I'm working on, objects are built using components, which basically means that all of the enemies, the player, and so on have a for loop in their Step events to iterate through and update all their components if need be. At most I have about 80 such objects in the game at any time, and performance hasn't been a problem yet.
You should always first try to get a working solution, then test it in conditions specific to your game: e.g. if you are going to need 100 items in the structure, try that, and only if the performance is not satisfactory, optimize.
I made a game in Game Maker that had a similar requirement. If you know the classic frog game (Frogger), it had logs floating on a river.
Movement: For that, I created a number of wood objects that move in a particular direction. When the frog collides with one of them, it starts to float in that direction. Also, if a wood object goes out of the window on one side, I place it back at the other side of the window.
Picking: I think that to pick up one of the items, you will need to define the collision inside each object. After the collision you can define your next steps.
I've been a programming student for about six months. I'd been wanting to write a chess game for a long time, and I finally made it. I'm very happy with the result; however, there's one point I don't know how to address. The AI is based on an alpha-beta-pruning #minimax method that chooses the best move on the basis of the best possible outcome for the current player, with a default depth of 3, which means the computer can think 3 turns ahead. The computer does choose the correct move, but the method in its current implementation is very slow.
#provisional makes and 'unmakes' a possible move, and returns the value returned from the code block. The evaluation function #evaluate is very simple: it's just a sum of the material value of the pieces and their location values according to how they are placed on the board.
I’d really appreciate some light here, as I don’t know how to get a faster version of this method.
Thank you so much for your time.
This is the method:
def minimax(move, depth, alpha, beta, maximizing_player)
  return board.evaluate if depth.zero?

  board.provisional(move, color) do
    if maximizing_player
      # the maximizing player just moved; minimize over black's replies
      best_minimizing_evaluation = Float::INFINITY
      board.generate_moves(:black).each do |possible_move|
        evaluation = minimax(possible_move, depth - 1, alpha, beta, false)
        best_minimizing_evaluation = [best_minimizing_evaluation, evaluation].min
        beta = [beta, evaluation].min
        break if beta <= alpha # alpha-beta cutoff
      end
      best_minimizing_evaluation
    else
      # the minimizing player just moved; maximize over white's replies
      best_maximizing_evaluation = -Float::INFINITY
      board.generate_moves(:white).each do |possible_move|
        evaluation = minimax(possible_move, depth - 1, alpha, beta, true)
        best_maximizing_evaluation = [best_maximizing_evaluation, evaluation].max
        alpha = [alpha, evaluation].max
        break if beta <= alpha # alpha-beta cutoff
      end
      best_maximizing_evaluation
    end
  end
end
With an initial depth value of 3, it takes between 15 and 50 seconds for the method to resolve and return the chosen move; this is a lot, and makes the game barely enjoyable. Changing the depth value to 2, the times are more reasonable, being about a third of the previous times, but I’d really like to keep the depth at 3. With a depth of 1, of course, it takes less than a second.
I realize that some improvements can be made; however, I don't know how to make them:
Will a negamax version of this method significantly improve its performance?
I'm aware of tail-recursion optimization; however, is it even possible here? I don't know whether it applies to a tree-generating method like this.
I've been told that pre-sorting the moves before they are evaluated can improve the performance of alpha-beta minimax, but how can I sort the moves? I can't sort them by immediate outcome, because, for instance, sometimes it's worth sacrificing a piece to win a better position. I could sort them by the best possible outcome in 2 turns... but then I'd be computing everything twice, once for the sorting of the moves and once for the actual move evaluation. (A cheap static ordering that avoids this is sketched after this list.)
Is implementing a transposition table worth it? I mean, there are tons of potential positions, and they can easily make a file really big. For example, in one day the program generated a 100 MB text file, and I didn't notice a huge performance improvement, as the computer doesn't always make the same moves, and neither do I. The number of different positions in a chess game is enormous, unlike in a game like Tic-Tac-Toe. (An in-memory variant is sketched below.)
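For what it's worth on the negamax question: negamax with alpha-beta visits exactly the same nodes as the minimax formulation, so it simplifies the code rather than speeding it up.

On move ordering, a common trick is a purely static sort that needs no lookahead at all: try captures first, ranked by "most valuable victim, least valuable attacker" (MVV-LVA). A minimal sketch in Python; move.captured and move.piece are assumed attributes, not anything from the code above:

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 200}

def order_moves(moves):
    def score(move):
        if move.captured is None:
            return 0  # quiet moves go last
        # prefer capturing a big piece with a small one
        return 10 * PIECE_VALUE[move.captured] - PIECE_VALUE[move.piece]
    return sorted(moves, key=score, reverse=True)

And on the transposition table: rather than a file on disk, it is usually an in-memory dict that lives only for the current search, keyed by a cheap hash of the position (Zobrist hashing is the standard choice). A sketch, assuming a hypothetical board.position_key method; a real table would also record whether the stored value is exact or only a bound:

table = {}

def cached_search(board, depth, search):
    key = (board.position_key(), depth)  # same position searched to same depth
    if key not in table:
        table[key] = search(board, depth)
    return table[key]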
Thank you so much.
I was given a brain puzzle from lonpos.cc as a present. I was curious how many different solutions there were, and I quite enjoy writing algorithms and code, so I started writing an application to brute-force it.
The puzzle looks like this: http://www.lonpos.cc/images/LONPOSdb.jpg / http://cdn100.iofferphoto.com/img/item/191/498/944/u2t6.jpg
It's a board of 20x14 "points", and all puzzle pieces can be flipped and rotated. I wrote an application where each piece (and the puzzle) is represented like this:
01010
00100
01110
01110
11111
01010
Now my application so far is reasonably simple.
It takes the list of pieces and a blank board, pops off piece #0, flips it into every orientation, and for each orientation tries to place it at every x and y coordinate. If it successfully places a piece, it passes a copy of the new "board" with those cells taken to a recursive function, which tries all combinations for the remaining pieces.
Explained in pseudocode:
bruteForce(Board base, List pieces) {
    piece = pieces.pop()
    for (orientation in piece.allFlipsAndRotations()) {
        for (y = 0; y < base.height; y++) {
            for (x = 0; x < base.width; x++) {
                if (canPlace(orientation, x, y)) {
                    Board newBoard = base.clone()
                    newBoard.placePiece(orientation, x, y)
                    bruteForce(newBoard, pieces)
                }
            }
        }
    }
}
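Here is the same recursion as a runnable Python sketch, just to pin the details down; all the names are mine, not from the pseudocode. The board is a set of free (x, y) cells, and each piece is a list of orientations, each orientation a frozenset of normalized (dx, dy) offsets (flips/rotations precomputed and deduplicated):

def brute_force(free, pieces, placed, solutions):
    if not pieces:                      # every piece used: found a solution
        solutions.append(list(placed))
        return
    piece, rest = pieces[0], pieces[1:]
    for orientation in piece:
        for (ax, ay) in free:           # anchor the orientation at each free cell
            cells = frozenset((ax + dx, ay + dy) for (dx, dy) in orientation)
            if cells <= free:           # the whole piece fits in free space
                placed.append(cells)
                brute_force(free - cells, rest, placed, solutions)
                placed.pop()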
Now I'm trying to find out ways to make this quicker. Things I've thought of so far:
Making it solve in parallel - Implemented, now using 4 threads.
Sorting the pieces, and only trying to place the pieces that will fit in the x,y space we're trying to fill (i.e., if we're on the bottom row, and we only have 4 "points" from our position to the bottom, don't try the ones that are 8 high).
Not duplicating the board, instead using placePiece and removePiece or something like it.
Checking for "invalid" boards, aka if a piece is impossible to reach (boxed in completely).
Anyone have any creative ideas on how I can do this quicker? Or any way to mathematically calculate how many different combinations there are?
I don't see any obvious way to do things fast, but here are some tips that might help.
First off, if you ignore the bumps, you have a 6x4 grid to fill with 1x2 blocks. Each of the blocks has 6 positions where it can have a bump or a hole. Therefore, you're trying to find an arrangement of the blocks such that at each edge, a bump is matched with a hole. Also, you can represent the pieces much more efficiently using this information.
Next, I'd recommend trying all ways to place a block in a specific spot rather than all the places to put a specific block anywhere. This will reduce the number of false trails you go down.
This looks like the Exact Cover Problem: you basically want to cover all fields on the board with your given pieces. I can recommend Dancing Links, published by Donald Knuth. In the paper you'll find a clear example of the pentomino problem, which should give you a good idea of how it works.
You basically set up a system that keeps track of all possible ways to place a specific block on the board. By placing a block, you would cover a set of positions on the field. These positions can't be used to place any other blocks. All possibilities would then be erased from the problem setting before you place another block. The dancing links allows for fast backtracking and erasing of possibilities.
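To make that concrete, here is a minimal sketch of Knuth's Algorithm X in Python, using plain dicts and sets instead of the doubly linked "dancing" nodes (this compact formulation is well known; it behaves the same, just with more copying). X maps each constraint (e.g. a board cell) to the set of candidate rows covering it; Y maps each row (a concrete placement of one piece) to the constraints it covers:

def solve(X, Y, solution=None):
    if solution is None:
        solution = []
    if not X:
        yield list(solution)      # every constraint covered exactly once
        return
    c = min(X, key=lambda col: len(X[col]))  # most constrained column first
    for r in list(X[c]):
        solution.append(r)
        cols = select(X, Y, r)
        yield from solve(X, Y, solution)
        deselect(X, Y, r, cols)
        solution.pop()

def select(X, Y, r):
    cols = []
    for j in Y[r]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)    # row i conflicts with r: drop it
        cols.append(X.pop(j))
    return cols

def deselect(X, Y, r, cols):
    for j in reversed(Y[r]):
        X[j] = cols.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)       # undo, restoring the dropped rows

# tiny demo: constraints 1..4, three candidate "placements"
Y = {"r1": [1, 2], "r2": [3, 4], "r3": [1, 2, 3, 4]}
X = {j: {r for r in Y if j in Y[r]} for j in [1, 2, 3, 4]}
for s in solve(X, Y):
    print(s)  # finds ['r1', 'r2'] and ['r3']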
I'm reading about visual programming languages these days. So I've thought up two "paradigms". In both of them, you have one start point, and several end points.
Now, you could either begin at the start point or move in reverse from the end points (the order of end points is known).
Beginning from the start point feels weird, because you can have "splits" in the data flow. Say I have an integer, and this integer is needed by two functions simultaneously. Bad. I don't want to get into concurrent coding. At least not yet. Or should I?
Beginning at the end points feels much better. You start at the first end point, check whatever is needed, and evaluate that. I believe this is lazy evaluation. But a problem comes up when you have multiple inputs: how do you decide the order in which to evaluate the inputs?
Can you point me to some articles/papers/something on the internet? Or maybe tell me a few keywords to look for?
If I get what you mean, using the same integer in two functions is exactly that: you just use it twice; no need to bring concurrency in. If the 'implementation' you were thinking about destroyed input values, you could take a copy before using them.
int i = 2;
int j = fun1(i);
int k = fun2(i);
int res = fun3(j, k);
would become:
      i = 2 [A]
        |
     Clone [B]
      /     \
   i_1       i_2
    |         |
fun1 [C]   fun2 [D]
    |         |
    j         k
     \       /
      fun3 [E]
        |
       res
But there's no need for concurrency in order to evaluate the graph. You can just evaluate 'parallel' branches left to right (as indicated by the A-B-C-... labelling).
Top-down (i.e., from start to end), left to right feels more natural than bottom-up, provided bottom-up actually has a well-defined meaning. On that latter point: even assuming you have the results of the program, you can't always compute the inputs from them; think about what happens when the funXXX are not injective (for example fun1(x) = x*x) and thus not invertible.
I hope I'm not completely misinterpreting your train of thought.
Moving forward, what you want is the topological sort of your dependency graph - that is, an order in which to execute nodes such that you never execute a node before its dependencies. This assumes, naturally, that there are no cycles in your graph.
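A minimal sketch of the standard algorithm for this (Kahn's), assuming the graph is a plain dict mapping each node to the nodes that depend on it:

from collections import deque

def topological_order(graph):
    indegree = {n: 0 for n in graph}
    for node in graph:
        for dependent in graph[node]:
            indegree[dependent] = indegree.get(dependent, 0) + 1
    ready = deque(n for n, d in indegree.items() if d == 0)  # no dependencies
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dependent in graph.get(node, ()):
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle")
    return order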
Moving backwards, what you're doing is recursively resolving the graph. Starting with the end node, for each dependency that is not yet calculated, you recursively invoke the procedure on that node, until all input values are evaluated. This has the advantage that you never process nodes that aren't required by a particular end state.
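The backwards direction is only a few lines once you add a cache, so shared inputs (like the integer feeding two functions above) are computed exactly once. A sketch with assumed names:

def evaluate(node, inputs, op, cache):
    # inputs: node -> list of dependency nodes; op: node -> function
    if node not in cache:
        args = [evaluate(dep, inputs, op, cache) for dep in inputs.get(node, [])]
        cache[node] = op[node](*args)
    return cache[node]

# wiring up the i/fun1/fun2/fun3 example above (stand-in functions)
fun1 = lambda i: i + 1
fun2 = lambda i: i * 2
fun3 = lambda j, k: j + k
inputs = {"i": [], "j": ["i"], "k": ["i"], "res": ["j", "k"]}
op = {"i": lambda: 2, "j": fun1, "k": fun2, "res": fun3}
print(evaluate("res", inputs, op, {}))  # i=2 -> j=3, k=4 -> res=7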
Which of the two approaches is best depends somewhat on what precisely you're doing.
Even describing this problem is hard, but I'll give it a go. I've been struggling with this for a few days and decided to ask here.
OK, so I'm trying to model "concepts", or "things" as I call them. Just concepts in general. It's to do with processing logic.
So, each "thing" is defined by it's relationship to other things. I store this as a set of 5 bits per relationship. A 'thing' could be like this:
class Thing {
    char* Name;
    HashTable<Thing*, int> Relationships; // 5 bits of the int per relationship
};
So, I model "Things" like that. 5 bits per relationship. Each bit stands for one possible relationship. Like this: 1 equals, 2 inside, 3 outside, 4 contains, 5 overlaps. Having all 5 bits on means we totally don't know what the relationship is. Having 2 bits means we think the relationship could be one of two possibilities. Relationships start off as "unknown" (all 5 bits are true) and get more specific as time goes on.
So this is how I model increasing knowledge over time. Things start off in a fully unknown state, and can pass through partially known states, and can reach fully known states.
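As a sketch of that narrowing process in Python (the constant names are mine):

# each relationship is one bit; "unknown" is all five set
EQUALS, INSIDE, OUTSIDE, CONTAINS, OVERLAPS = (1 << i for i in range(5))
UNKNOWN = EQUALS | INSIDE | OUTSIDE | CONTAINS | OVERLAPS

rel = UNKNOWN                # fully unknown
rel &= INSIDE | CONTAINS     # partially known: one of two possibilities
rel &= INSIDE                # fully known
fully_known = bin(rel).count("1") == 1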
A little more background:
I try to add extra definition to my modelling of "concepts" (things), by using extra classes. Like this:
class ArrayDefinition {
    Array<Thing> Items;
};
And my Thing class becomes like this:
class Thing {
    char* Name;
    HashTable<Thing*, int> Relationships;
    ArrayDefinition* ArrayDef;
};
This "ArrayDef" doesn't HAVE to be used. It's just there to be used, if needed. Some "things" don't have arrays, some do. But all "things" have relationships.
I can process this "ArrayDefinition" to figure out the relationship between two things! For example, if X = [ A B C D E ] and Y = [ C D E ], my code can process these two arrays, and figure out that "Y inside X".
OK, so that's enough background. I've explained the core problem, avoiding my real code which has all sorts of distracting details.
Here's the problem:
The problem is making this not run ridiculously slow.
Imagine, there are 2000 "things". Let's say 1000 of these have array definitions. Now, that makes 500,000(ish) possible "array-pairs" that we need to compare against each other.
I hope I'm starting to finally make sense now. How to avoid processing them all against each other? I've already realised that if two "things" have a fully known relationship, there is no point in comparing their "array definitions", because that's just used to figure out extra detail on the relationship, but we have the exact relationship, so there's no point.
So... let's say only 500 of these "things with arrays" have unknown or partially known relationships. That still makes 125,000(ish) possible "array-pairs" to compare!
Now... to me, the most obvious place to start, is realising that unless a relationship used to define two arrays changes (Becomes more specific), then there is no point processing this "array-pair".
For example, let's say I have these two arrays:
X = [ A B C D E ]
Y = [ Q W R T ]
Now, if I say that T=R, that's very nice. But this does not affect the relationship between X and Y. So just because T's relationship to R has become known as "equal", whereas before it might have been fully unknown, that does not mean I need to compare X and Y again.
On the other hand, if I say "T outside E", this is a relationship between things used to define the two arrays. So saying that "T outside E" means I need to process X's array against Y's array.
I really don't want to have to compare 500,000 "array-pairs" just to process logic on 1000 arrays when almost nothing has changed between them!
So... my first attempt at simplifying this was to keep, for each thing, a list of all the arrays it is used to define.
Let's say I have 3 arrays:
A = [ X Y Z ]
B = [ X X X X ]
C = [ X Z X F ]
Well, X is used in 3 arrays. So, X could keep a list of all the arrays it is used inside of.
So, if I said "X inside Y", this could bring up a list of all the arrays that Y is used to define, and all the arrays X is used to define. Let's say X is used in 3 arrays, and Y is used in 1 array. From this, we can figure out that there are 2 "array-pairs" we need to compare (A vs B, and A vs C).
We can further trim this list by checking whether any of the array pairs already have fully known relationships.
The problem I have with this, is it STILL seems excessive.
Let's say X is a really common "thing". It's used in 10,000 arrays. And Y is a really common thing, used in 10,000 arrays.
I still end up with 100,000,000 array-pairs to compare. OK, so let's say I don't need to compare them all; actually, only 50 of them were partially known or totally unknown.
But... I still had to run over a list of 100,000,000 array-pairs to figure out which of them were partially known. So it's still inefficient.
I'm really wondering if there is no efficient method of doing this. And if really all I can do is make a few effective "heuristicish" strategies. I haven't had too much luck coming up with good strategies yet.
I realise that this problem is highly specialised. And I realise that reading this long post may take too long. I'm just not sure how to shrink the post length or describe this in terms of more common problems.
If it helps... My best attempt to express this in common terms, is "how to compare only the lists that have been updated".
Anyone got any ideas? That would be great. If not... perhaps just me writing this out may help my thinking process.
The thing is, I just can't help but feel that there is some algorithm or approach that can make this problem run fast and efficient. I just don't know what that algorithm is.
Thanks all
In general, you won't be able to come up with a structure that is as-fast-as-possible for every operation. There are tradeoffs to be made.
This problem looks very similar to that of executing queries on a relational database - SELECT * WHERE .... You might consider looking there for inspiration.
I'm not sure I understand what you are doing completely (the purpose of ArrayDefinition is particularly hazy), but I think you should consider separating the modeling of the objects from their relationships. In other words, create a separate mapping from object to object for each relationship. If objects are represented by their integer index, you need only find an efficient way to represent integer to integer mappings.
I had a sleep, and when I woke up I had a new idea. It might work...
What if each "thing" keeps a list of all the "array definitions" it is used to define?
class Thing {
    char* Name;
    HashTable<Thing*, int> Relationships;
    ArrayDefinition* ArrayDef;
    Set<ArrayDefinition*> UsedInTheseDefs;
};

class ArrayDefinition {
    Array<Thing> Items;
    Set<int> RelationModifiedTag;
};
And I keep a global list of all the "comparable array pairs".
And I also construct a global list, of all the "arrays that can be compared" (not in pairs, just one by one).
Then, every time a relationship is changed, I can go over the list of "array definitions" I'm inside of and add a little "tag" to each of them :)
So I can do something like this:
static int CurrRel = 0;
CurrRel++; // the actual number doesn't matter, it's just used for matching

foreach (Arr in this->UsedInTheseDefs) {
    Arr->RelationModifiedTag.Add(CurrRel);
}
foreach (Arr in other->UsedInTheseDefs) {
    Arr->RelationModifiedTag.Add(CurrRel);
}
I altered both sides of the relationship. So if I did this: "A outside B", then I've added a "modifiedtag" to all the arrays A is used to define, and all the arrays B is used to define.
So, then I loop over my list of "comparable array-pairs". Each pair, of course, is two arrays, each of which has a "RelationModifiedTag" set.
I check the two RelationModifiedTag sets against each other to see whether they share any number. If they DO, then this array pair has a relationship that has just been altered, so I do my array comparison then.
It should work :)
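That matching step, as a small Python sketch (names assumed):

def pairs_to_recompare(comparable_pairs):
    # two arrays need re-comparing only if their tag sets intersect,
    # i.e. some relationship used by both definitions changed this round
    return [
        (a, b)
        for (a, b) in comparable_pairs
        if a.relation_modified_tags & b.relation_modified_tags
    ]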
It does require a bit of overhead, but the main thing is that I think it scales well to larger data sets. For smaller data sets, say only 10 arrays, a simpler, more brute-force approach could be used: just compare all array-pairs that don't have a fully known relationship, and don't bother keeping track of which relationships have been altered.
Further optimisations are possible, but I won't go into them here, because they just distract from the main algorithm, and they are kind of obvious. For example, if I have two sets to compare, I should loop over the smaller set and check against the bigger one.
Apologies for the length of all this text, and thanks for all the attempts to help.
Well, first of all, some vocabulary.
Design Pattern: Observer
It's a design pattern that allows objects to register themselves with others and ask to be notified of events.
For example, each ThingWithArray could register itself with the Things it manages, so that when a Thing is updated the ThingWithArray gets notified.
Now, there is usually an unsubscribe method too, so as soon as a ThingWithArray no longer depends on some Thing (because all the relations that use it have been resolved), it can unsubscribe itself so as not to be notified of changes any longer.
This way you only notify those which actually care about the update.
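A minimal sketch of the pattern in Python; the names (subscribe, notify, relationship_changed) are mine, and observers are assumed to implement notify:

class Thing:
    def __init__(self, name):
        self.name = name
        self.subscribers = set()   # objects that care about this thing

    def subscribe(self, observer):
        self.subscribers.add(observer)

    def unsubscribe(self, observer):
        self.subscribers.discard(observer)

    def relationship_changed(self, other):
        # notify everything watching either side of the relationship
        for observer in self.subscribers | other.subscribers:
            observer.notify(self, other)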
There is one point to take into account though: if you have recursive relationships, it might get hairy, and you'll need to come up with a way to avoid this.
Also, follow ergosys' advice and model the relationships outside of the objects. Having one BIG class is usually the start of trouble... If you have difficulty cutting it into logical parts, the problem is not yet clear to you, and you should ask for help modelling it. Once you've got a clear model, things usually fall into place a bit more easily.
From your own answer I deduce that unknown relationships are greatly outnumbered by known relationships. You could then keep track of the unknown relationships of each thing in a separate hash table/set. As a further optimization, instead of keeping track of all the definitions a thing is used in, keep track of which of those definitions have unknown relationships - that is, which relationships can be affected. Then, given a newly defined relationship between X and Y, take the affected definitions of one of them, and intersect each of its unknown relationships with the affected definitions of the other. To keep the acceleration data structure up to date: when a relationship becomes known, remove it from the unknown relationships, and if no unknown relationships remain, go over the array definition and remove the thing from the can-affect sets.
The datastructure would then look something like this:
class Thing {
    char* Name;
    HashTable<Thing*, int> Relationships;
    Set<Thing*> UnknownRelationships;
    ArrayDefinition* ArrayDef;
    Set<Thing*> CanAffect; // Things where this is in the ArrayDefinition and UnknownRelationships is not empty
};

class ArrayDefinition {
    Array<Thing> Items;
};
I was thinking earlier today about an idea for a small game and stumbled upon how to implement it. The idea is that the player can make a series of moves that each cause a little effect, but that, done in a specific sequence, cause a greater effect. So far so good; this I know how to do. Obviously, I had to make it more complicated (because we love to make things more complicated), so I thought that there could be more than one possible path for the sequence, each of which would cause a greater, albeit different, effect. Also, part of some sequences could be the beginning of other sequences, or even whole sequences could be contained in other, bigger sequences. Now I don't know for sure the best way to implement this. I have some ideas, though.
1) I could implement a circular n-linked list. But since the list of moves never ends, I fear it might cause a stack overflow ™. The idea is that every node would have n children, and upon receiving a command it might lead you to one of its children or, if no child matches the command, lead you back to the beginning. Upon arrival at any child, a couple of functions would be executed, causing the small and the big effect. This might, though, lead to a lot of duplicated nodes in the tree, to cope with all the possible sequences ending on that specific move with different effects, which might be a pain to maintain, but I'm not sure. I've never tried something this complex in code, only in theory. Does this structure exist and have a name? Is it a good idea?
2) I could implement a state machine. Then, instead of wandering around a linked list, I'd have some giant nested switch that would call functions and update the machine's state accordingly. It seems simpler to implement, but... well... it doesn't seem fun... nor elegant. Giant switches always seem ugly to me, but would this work better?
3) Suggestions? I am good, but I am far from experienced. The good thing about the coding field is that no matter how weird your problem is, someone has solved it in the past - but you must know where to look. Someone might have a better idea than those I had, and I'd really like to hear suggestions.
I'm not absolutely, completely sure that I understand exactly what you're saying, but as an analogous situation, say someone's inputting an endless stream of numbers on the keyboard. '117' is a magic sequence, '468' is another one, and '411799' is another (which contains the first one).
So if the user enters:
55468411799
you want to fire 'magic events' at the *s:
55468*4117*99*
or something like that, right? If that's analogous to the problem you're talking about, then what about something like this (Java-like pseudocode):
MagicSequence fireworks = new MagicSequence(new FireworksAction(), 1, 1, 7);
MagicSequence playMusic = new MagicSequence(new MusicAction(), 4, 6, 8);
MagicSequence fixUserADrink = new MagicSequence(new ManhattanAction(), 4, 1, 1, 7, 9, 9);
Collection<MagicSequence> sequences = ... all of the above ...;
while (true) {
    int num = readNumberFromUser();
    for (MagicSequence seq : sequences) {
        seq.handleNumber(num);
    }
}
where MagicSequence has something like:
Action action = ... populated from constructor ...;
int[] sequence = ... populated from constructor ...;
int position = 0;

public void handleNumber(int num) {
    if (num == sequence[position]) {
        // They've entered the next number in the sequence
        position++;
        if (position == sequence.length) {
            // They've got it all!
            action.fire();
            position = 0; // Or disable this Sequence from accepting more numbers if it's a once-off
        }
    } else {
        // Missed a number, so start again - but let this number begin a new match.
        // (Sequences with repeating prefixes, e.g. 1,1,7 against input 1,1,1,7,
        // would need a KMP-style fallback to be caught in every case.)
        position = (num == sequence[0]) ? 1 : 0;
    }
}
You might want to implement a state machine anyway, but you don't have to hardcode state transitions.
Try making a graph of states, where a link from state A to state B means A can lead to B.
Then you can traverse the graph at runtime to find where the player goes.
Edit: You can define a graph node as (a small sketch follows):
- state-id
- a list of links to other states, where every link defines:
  - state-id
  - precondition: a list of states that must have been visited before going to this state
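One possible encoding of that node description in Python (all names are mine):

from dataclasses import dataclass, field

@dataclass
class Link:
    target: str                              # state-id this link leads to
    precondition: frozenset = frozenset()    # states that must already be visited

@dataclass
class StateNode:
    state_id: str
    links: list = field(default_factory=list)

def next_states(node, visited):
    # links are usable only once their precondition states have been visited
    return [link.target for link in node.links if link.precondition <= visited]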
What you're describing sounds very similar to the technology tree in a game like Civilization.
I don't know how the Civ authors built theirs, but I'd be inclined to use a multigraph to represent possible 'moves' - there will be some you can start at with no 'experience', and once you're in them, there will be multiple paths through to the end.
Draw out what potential options you can have at each stage of the game, and then draw lines going from some options to others.
That should give you a start on implementation, as graphs are [relatively] easy concepts to implement and utilize.
Sounds like a neural network. You could create one and train it to recognize the patterns that cause the various effects you are looking for.
What you're describing sounds somewhat similar to a dependency graph or a word graph. You might look into those.
@Cowan, @Javier: Nice idea, mind if I add to it?
Let the MagicSequence objects listen to the incoming stream of user input; that is, notify them of the input (broadcast) and let each of them append the input to their internal input stream. This stream is cleared when the input is not the expected next input in the pattern that would have the MagicSequence fire its action. As soon as the pattern is completed, fire the action and clear the internal input stream.
Optimize this by only feeding input to the MagicSequences that are waiting for it. This could be done in two ways:
You could have an object that lets all MagicSequences register for events corresponding to the numbers in their patterns. MagicSequence(1,1,7) would add itself to got1 and got7, for example:
UserInput.got1 += MagicSequence[i].SendMeInput;
You could optimize this so that after each input, MagicSequences deregister from events that are no longer valid and register for the newly valid ones.
Create a small state machine for each effect you want. On each user action, 'broadcast' it to all state machines. Most of them won't care, but some will advance, or maybe go backwards. When one of them reaches its goal, produce the desired effect.
To keep the code neat, don't hardcode the state machines. Instead, build a simple data structure that encodes the state graph: each state is a node with a list of interesting events, each of which points to the next state's node. Each machine's state is then simply a reference to the appropriate state node.
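A sketch of that data-driven encoding in Python (names assumed). The graph is plain data, state -> event -> next state, and each live machine is just a reference to its current state. Note that, like the simple reset above, falling back to the start state ignores overlapping prefixes:

class Matcher:
    def __init__(self, graph, action, start="start", goal="done"):
        self.graph, self.action = graph, action
        self.start, self.goal = start, goal
        self.state = start

    def handle(self, event):
        # follow the edge for this event, or fall back to the start state
        self.state = self.graph[self.state].get(event, self.start)
        if self.state == self.goal:
            self.action()
            self.state = self.start

def broadcast(event, machines):
    for machine in machines:
        machine.handle(event)

# a machine for the '117' sequence from Cowan's example
machines = [Matcher({"start": {1: "s1"}, "s1": {1: "s11"}, "s11": {7: "done"}},
                    action=lambda: print("117!"))]
for n in [5, 1, 1, 7]:
    broadcast(n, machines)  # prints "117!" once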
Edit: It seems Cowan's advice is equivalent to this, but he optimises his state machines to express only simple sequences. That seems enough for your specific problem, but more complex conditions might need a more general solution.