I'm currently developing a card game using ActionScript 3, and I was wondering how to be sure that there is at least one way to win.
The game has gameplay similar to this one:
https://play.google.com/store/apps/details?id=com.gameduell.cleopatraspyramidnew&hl=en
https://itunes.apple.com/us/app/cleopatras-pyramid/id401141292?mt=8
The player has to play a card that matches the previous or the next value of the presented cards.
I have tried different methods, but I'm still not satisfied.
What I want to know is how to be sure that, within the hidden cards, there is at least one winning sequence, so that the player has the possibility to win.
thanks
Given all the cards are pre-placed, you can do the following:
1. Start with a clear board. Pair up your set of cards to get a set of pairs.
2. Take one pair out of the remaining set of pairs. If the set is empty, return the filled board as the algorithm's result.
3. Place both cards of the pair into the next available positions on your board, so that both places are available to pick up in the gameplay phase. If no such places are available, this means you've placed the previous pair(s) wrongly, so you have to backtrack: pop a state from the stack and attempt another placement from that state.
4. Push the current state onto the stack.
5. Go to 2.
The trick is that you start with a board that is already won and add a layer (a pair) such that removing that pair leads back to a winnable state, so the board is winnable by construction of the state sequence. You can simply add random pairs to random available locations in your pyramid, but occasionally the randomness will leave, for example, the last two places not both available to pick up during actual gameplay; that is the situation where you retract previous placements, possibly more than one. You can remedy this by giving a depth value to grid spaces: if the maximum depth of the free spaces equals the number of unplaced pairs, occupy the deepest places instead of random ones.
If your game involves playing one card at a time, then an additional check should be performed when picking the next card: whether the remaining set of cards becomes disconnected, e.g. you've picked all the 4's while both 3's and 5's are left in the unpicked set. In that case you have to backtrack and select another card at the previous stage. The algorithm can look like this (a rough Java sketch follows the list):
1. Prepare an empty grid and the set of cards to place.
2. Recursion entry point. If passed -1 (the first time), select the full set of available cards. If not the first time, select the set of valid cards based on the previously selected one (e.g. if it was a 4, select all 3's and all 5's, and all 4's if that is allowed). Shuffle this candidate set. If the candidate set is empty and the set of unplaced cards is also empty, return from the recursion with true; if the candidate set is empty but unplaced cards remain, return false.
3. Select the next card from the candidate set. If none remain untried, fall back out of the recursion with failure.
4. Check whether removing the current card disconnects the remaining set of unplaced cards. If it does, go to 3.
5. Select an empty available place in the grid and place the selected card there.
6. Enter the recursion (save state, go to 2) with the current card's value, the updated grid, and the reduced set of unplaced cards.
7. If the recursion result is false, retract the placement made in step 5 and go to 3. If it is true, leave the recursion with true.
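Below is a rough Java sketch of that recursion. The representation is my own simplification, not your game's exact rules: 13 ranks with wrap-around matching, a flat array of slots where any empty slot counts as available, and as many slots as cards. The explicit connectivity check from step 4 is omitted; the backtracking covers that case anyway, just less efficiently.

```java
import java.util.*;

/** Sketch of a backtracking deal generator: cards are placed in an order in which
 *  they could also be played, so a winning sequence exists by construction. */
public class WinnableDealGenerator {
    private final Random rng = new Random();

    /** Returns a slot-to-rank assignment that is winnable, or null on failure. */
    public int[] generate(List<Integer> cards, int slots) {
        int[] grid = new int[slots];            // 0 = empty slot
        List<Integer> remaining = new ArrayList<>(cards);
        return place(-1, grid, remaining) ? grid : null;
    }

    private boolean place(int previousRank, int[] grid, List<Integer> remaining) {
        if (remaining.isEmpty()) return true;   // every card placed: done

        // Candidate cards: the first card is free, later ones must match the previous one.
        List<Integer> candidates = new ArrayList<>();
        for (int c : remaining)
            if (previousRank == -1 || matches(previousRank, c)) candidates.add(c);
        Collections.shuffle(candidates, rng);

        for (int card : candidates) {
            int slot = firstEmptySlot(grid);
            grid[slot] = card;
            remaining.remove(Integer.valueOf(card));
            if (place(card, grid, remaining)) return true;   // deeper levels succeeded
            remaining.add(card);                              // backtrack and try another card
            grid[slot] = 0;
        }
        return false;   // no candidate worked: the caller retracts its own placement
    }

    /** Matching rule used here: value differs by one, with 13 wrapping to 1 (an assumption). */
    private boolean matches(int a, int b) {
        int d = Math.abs(a - b);
        return d == 1 || d == 12;
    }

    private int firstEmptySlot(int[] grid) {
        for (int i = 0; i < grid.length; i++) if (grid[i] == 0) return i;
        return -1;
    }
}
```

Because each placed card matches the one placed before it, playing the cards in placement order is itself a winning sequence, which is exactly the guarantee you are after.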
I want to build an algorithm that assigns display positions to calendar events.
Each event looks like this:
{
title: 'A Title',
start: aDate,
end: anotherDate,
position: aNumber
}
I want to achieve a layout similar to this
(A & B have position 0, C & D have position 1, and E has position 2) or any other combination, but without using more positions than necessary.
Can anyone suggest which algorithm might do the trick of automatically assigning suitable positions to my events? (A name reference or pseudocode would be a lot of help.)
My thoughts so far are to keep track, in each event object, of the other overlapping events and then somehow compare their positions and overlaps to get the number, but I can't quite figure it out.
Given several existing free lanes where you can place an event, any of them is an acceptable choice, in the sense that the total maximum number of required lanes will not be affected. Given no free lanes, there is only one choice: add a new lane.
Therefore, the problem is actually very simple: just place an event in the first free lane that you can find (or create a new one if none is currently free), and keep track of the lanes that are occupied and the times when they will be freed up.
This greedy approach could look as follows:
initialize a list of free lanes
for each event e,
1. check which occupied lanes are free for e.startTime
2. assign e.lane to a free lane, or add a new free lane if none empty
3. mark the e.lane as occupied until e.endTime is reached
Step 2 can stick to the lowest-numbered free lane (to yield a more top-compact representation), or spread events across lanes a bit (which may make aesthetic sense, although you will not know the total number of lanes required until after a first pass).
In any case, the algorithm only requires one pass, and minimal additional memory (keeping track of which lanes are occupied until what times).
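A minimal Java sketch of that greedy single pass; the Event class and field names simply mirror the structure in the question and are otherwise assumptions:

```java
import java.util.*;

class Event {
    String title;
    long start;        // e.g. epoch millis
    long end;
    int position;      // the lane we assign

    Event(String title, long start, long end) {
        this.title = title; this.start = start; this.end = end;
    }
}

class LaneAssigner {
    /** Assigns each event the lowest-numbered lane that is free at its start time. */
    static int assign(List<Event> events) {
        events.sort(Comparator.comparingLong((Event e) -> e.start));
        List<Long> laneBusyUntil = new ArrayList<>();   // lane i is occupied until this time

        for (Event e : events) {
            int lane = -1;
            for (int i = 0; i < laneBusyUntil.size(); i++) {
                if (laneBusyUntil.get(i) <= e.start) { lane = i; break; }  // this lane has freed up
            }
            if (lane == -1) {                            // no free lane: open a new one
                lane = laneBusyUntil.size();
                laneBusyUntil.add(0L);
            }
            e.position = lane;
            laneBusyUntil.set(lane, e.end);              // mark the lane occupied until e.end
        }
        return laneBusyUntil.size();                     // total number of lanes used
    }
}
```

Because events are processed in start-time order and a lane is reused as soon as it is free, the number of lanes used equals the maximum number of simultaneously overlapping events, so no layout can do better.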
I am looking to create a large list of items that allows for easy insertion of new items and for easily changing the position of items within that list. When updating the position of an item, I want to change as few fields as possible regarding the order of items.
After some research, I found that Jira's Lexorank algorithm fulfills all of these needs. Each story in Jira has a 'rank-field' containing a string which is built up of 3 parts: <bucket>|<rank>:<sub-rank>. (I don't know whether these parts have actual names, this is what I will call them for ease of reference)
Examples of valid rank-fields:
0|vmis7l:hl4
0|i000w8:
0|003fhy:zzzzzzzzzzzw68bj
When dragging a card above 0|vmis7l:hl4, the new card will receive rank 0|vmis7l:hl2, which means that only the rank-field for this new card needs to be updated while the entire list can always be sorted on this rank-field. This is rather clever, and I can't imagine that Lexorank is the only algorithm to use this.
Is there a name for this method of sorting used in the sub-rank?
My question is related to the creation of new cards in Jira. Each new card starts with an empty sub-rank, and the rank is always chosen such that the new card is located at the bottom of the list. I've created a bunch of new stories just to see how the rank would change, and it seems that the rank is always incremented by 8 (in base-36).
Does anyone know more specifically how the rank for new cards is generated? Why is it incremented by 8?
I can only imagine that after some time (270 million cards) there are no more ranks to generate, and the system needs to recalculate the rank-field of all cards to make room for additional ranks.
Are there other triggers that require recalculation of all rank-fields?
I suppose the bucket plays a role in this recalculation. I would like to know how?
We are talking about a special kind of indexing here. This is not sorting; it is just preparing items to end up in a certain order in case someone happens to sort them (by whatever sorting algorithm). I know that variants of this kind of indexing have been used in libraries for decades, maybe centuries, to ensure that books belonging together but lacking a common title end up next to each other in the shelves, but I have never heard of a name for it.
The 8 is probably chosen wisely as a compromise, maybe even by analyzing typical use cases. Consider this: If you choose a small increment, e. g. 1, then all tickets will have ranks like [a, b, c, …]. This will be great if you create a lot of tickets (up to 26) in the correct order because then your rank fields keep small (one letter). But as soon as you move a ticket between two other tickets, you will have to add a letter: [a, b] plus a new ticket between them: [a, an, b]. If you expect to have this a lot, you better leave gaps between the ranks: [a, i, q, …], then an additional ticket can get a single letter as well: [a, e, i, q, …]. But of course if you now create lots of tickets in the correct order right in the beginning, you quickly run out of letters: [a, i, q, y, z, za, zi, zq, …]. The 8 probably is a good value which allows for enough gaps between the tickets without increasing the need for many letters too soon. Keep in mind that other scenarios (maybe not Jira tickets which are created manually) might make other values more reasonable.
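This is not Jira's actual implementation, just a Java sketch of the gap idea described above: a new item appended at the bottom gets the previous rank plus 8 in base 36, and an item dropped between two neighbours gets the numeric midpoint of their ranks, growing the string only when the gap is exhausted. The strings are assumed to be the bare rank part, without the bucket or sub-rank separators.

```java
import java.math.BigInteger;

/** Base-36 rank strings with gaps, in the spirit of the description above. */
class RankSketch {
    private static final BigInteger GAP = BigInteger.valueOf(8);   // the "increment by 8" idea

    /** Rank for a new item appended at the bottom of the list. */
    static String afterLast(String lastRank) {
        return new BigInteger(lastRank, 36).add(GAP).toString(36);
    }

    /** Rank strictly between two existing ranks: numeric midpoint if there is room,
     *  otherwise extend the lower rank by one character. */
    static String between(String lo, String hi) {
        int len = Math.max(lo.length(), hi.length());
        String l = pad(lo, len), h = pad(hi, len);
        BigInteger a = new BigInteger(l, 36), b = new BigInteger(h, 36);
        if (b.subtract(a).compareTo(BigInteger.ONE) > 0) {
            String mid = a.add(b).shiftRight(1).toString(36);
            while (mid.length() < len) mid = "0" + mid;        // keep leading zeros
            return mid;
        }
        return l + "i";   // no room left: append a digit near the middle of 0..z
    }

    private static String pad(String s, int len) {             // pad on the right with '0', the smallest digit
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < len) sb.append('0');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(afterLast("i000w8"));   // i000wg
        System.out.println(between("hl2", "hl4")); // hl3
    }
}
```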
You are right, the rank fields get recalculated now and then; Lexorank calls this "balancing". Basically, balancing takes place on one of three occasions: ① the ranks are exhausted (the largest value is reached), ② the ranks have, due to user re-ranking of tickets, become too close together ([a, b, i] and something is supposed to go in between a and b), or ③ a balancing is triggered manually on the management page. (Actually, according to the presentation, Lexorank allows for up to three-letter ranks, so "too close together" can be something like aaa and aab, but the idea is the same.)
The <bucket> part of the rank is increased during balancing, so a messy [0|a, 0|an, 0|b] can become a nice and clean [1|a, 1|i, 1|q] again. The brownbag presentation about Lexorank (as linked by dandoen in the comments) mentions a round-robin use of <buckets>: instead of a constant increment (0→1→2→3→…), the bucket number is increased modulo 3, so it turns back to 0 after 2 (0→1→2→0→…). When comparing ranks, the sorting algorithm can then consider a 0 "greater" than a 2 (it will not be purely lexicographical then, admitted). If the balancing algorithm now works backwards (reordering the last ticket first), this keeps the sorting order intact the whole time. (This is just a side aspect, which is why I keep the explanation short, but if it is interesting, ask and I will elaborate.)
Sidenote: Lexorank also keeps track of minimum and maximum values of the ranks. For the functioning of the algorithm itself, this is not necessary.
I run a Java game server where I need to efficiently rank players in various ways. For example, by score, money, games won, and other achievements. This is so I can recognize the top 25 players in a given category to apply medals to those players, and dynamically update them as the rankings change. Performance is a high priority.
Note that this cannot easily be done in the database only, as the ranks will come from different sources of data and different database tables, so my hope is to handle this all in memory, and call methods on the ranked list when a value needs to be updated. Also, potentially many users can tie for the same rank.
For example, let's say I have a million players in the database. A given player might earn some extra points and instantly move from 21,305th place to 23rd place, and then later drop back off the top 25 list. I need a way to handle this efficiently. I imagine that some kind of doubly-linked list would be used, but am unsure of how to handle quickly jumping many spots in the list without traversing it one at a time to find the correct new ranking. The fact that players can tie complicates things a little bit, as each element in the ranked list can have multiple users.
How would you handle this in Java?
I don't know whether there is a library that may help you, but I think you can maintain a min-heap in memory. When a player's score updates, compare it to the root of the heap: if it is less, do nothing; otherwise adjust the heap.
That means you can maintain a min-heap of 25 nodes holding the top 25 players in one category.
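A small Java sketch of that min-heap idea using PriorityQueue (the Entry class, long player ids, and the update entry point are assumptions about how a real server would feed it). One caveat: the heap only knows about players who have already entered the top 25, so when a ranked player's score drops, this sketch keeps them in the heap with the lower value instead of promoting whoever should now replace them; that is a limitation of the heap-only idea itself.

```java
import java.util.*;

/** Keeps the top 25 players of one category in a min-heap, as suggested above. */
class Top25Tracker {
    private static final int CAPACITY = 25;

    private static final class Entry {
        final long playerId;
        final long score;
        Entry(long playerId, long score) { this.playerId = playerId; this.score = score; }
    }

    // Min-heap by score: the root is the lowest score currently inside the top 25.
    private final PriorityQueue<Entry> heap =
            new PriorityQueue<>(Comparator.comparingLong((Entry e) -> e.score));
    private final Map<Long, Entry> inHeap = new HashMap<>();

    /** Call whenever a player's score in this category changes. */
    void onScore(long playerId, long newScore) {
        Entry existing = inHeap.remove(playerId);
        if (existing != null) heap.remove(existing);   // already ranked: re-insert with the new score

        if (existing != null || heap.size() < CAPACITY || newScore > heap.peek().score) {
            Entry e = new Entry(playerId, newScore);
            heap.add(e);
            inHeap.put(playerId, e);
            if (heap.size() > CAPACITY) {              // evict whoever is now in 26th place
                Entry evicted = heap.poll();
                inHeap.remove(evicted.playerId);
            }
        }
        // Otherwise the new score does not beat the current 25th place: do nothing.
    }

    /** Current top players, best first. */
    List<Long> topPlayerIds() {
        List<Entry> sorted = new ArrayList<>(heap);
        sorted.sort((a, b) -> Long.compare(b.score, a.score));
        List<Long> ids = new ArrayList<>(sorted.size());
        for (Entry e : sorted) ids.add(e.playerId);
        return ids;
    }
}
```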
Forget the linked list. It allows fast insertions, but no efficient searching, so it's of no use here.
Use the following data
double threshold
ArrayList<Player> top;
ArrayList<Player> others; (3)
and manage the following properties
each player in top has a score greater or equal to threshold
each player in others has a score lower than threshold
top is sorted
top.size() >= 25
top.size() < 25 + N where N is some arbitrary limit (e.g., 50)
Whenever some player raises their score, do the following:
if they're in top, sort top (1)
if they're in others, check if their score promotes them to top
if so, remove them from others, insert in top, and sort top
if top grew too big, move the N/2 worst players from top to others and update threshold
Whenever some player lowers their score, do the following:
- if they're in others, do nothing
- if they're in top, check if their new score allows them to stay in top
- if so, sort top (1)
- otherwise, demote them to others, and check if top got too small
- if so, determine an appropriate new threshold and move all corresponding players to top. (2)
(1) Sorting top is cheap as it's small. Moreover, TimSort (i.e., the algorithm behind Arrays.sort(Object[])) works very well on partially sorted sequences. Instead of sorting, you can simply remember that top is unsorted and sort it later when needed.
(2) Determining a proper threshold can be expensive, and so can moving the players. That's why only N/2 players get moved away from top when it grows too big. This leaves some spare players and makes this case pretty improbable, assuming that players rarely lose score.
EDIT
For managing the objects, you also need to be able to find them in the lists. Either add a corresponding field to Player or use a TObjectIntHashMap.
EDIT 2
(3) When removing an element from the middle of others, simply replace it with the last element and shorten the list by one. You can do this because the order doesn't matter, and you should do it for speed. (4)
(4) The whole others list needn't be actually stored anywhere. All you need is a possibility to iterate all the players not contained in top. This can be done by using an additional Set or by simply iterating though all the players and skipping those scoring above threshold.
FINAL RECOMMENDATIONS
Forget the others list (unless I'm overlooking something, you won't need it).
I guess you will need no TObjectIntHashMap either.
Use a list top and a boolean isTopSorted, which gets cleared whenever a top score changes or a player gets promoted to top (simple condition: oldScore >= threshold | newScore >= threshold).
For handling ties, make top contain at least 25 differently scored players. You can check this condition easily when printing the top players.
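A rough Java sketch of these final recommendations: a small top list, a threshold, and a lazy isTopSorted flag, with the full player population scanned only when top has to be rebuilt. The Player class, the allPlayers collection, and the rebuild policy are my assumptions about how this would be wired into a real server.

```java
import java.util.*;

class Player {
    final String name;
    double score;
    Player(String name, double score) { this.name = name; this.score = score; }
}

/** Keeps a small 'top' list of the best players; everyone else just lives in
 *  'allPlayers' and is only scanned when the top list has to be rebuilt. */
class Leaderboard {
    private static final int MIN_TOP = 25;
    private static final int MAX_TOP = 75;             // 25 + N with N = 50

    private final Collection<Player> allPlayers;        // the full population, managed elsewhere
    private final List<Player> top = new ArrayList<>();
    private double threshold = Double.NEGATIVE_INFINITY;
    private boolean isTopSorted = false;

    Leaderboard(Collection<Player> allPlayers) {
        this.allPlayers = allPlayers;
        rebuildTop();
    }

    /** Call after a player's score has changed from oldScore to p.score. */
    void onScoreChanged(Player p, double oldScore) {
        boolean wasTop = oldScore >= threshold;
        boolean isTop = p.score >= threshold;
        if (!wasTop && !isTop) return;                  // irrelevant for the ranking

        if (!wasTop && isTop && !top.contains(p)) top.add(p);   // promotion into top
        if (wasTop && !isTop) top.remove(p);                    // demotion out of top
        isTopSorted = false;

        if (top.size() > MAX_TOP || top.size() < MIN_TOP) rebuildTop();
    }

    /** The best n players, sorted lazily on demand. */
    List<Player> best(int n) {
        if (!isTopSorted) {
            top.sort((a, b) -> Double.compare(b.score, a.score));  // TimSort, cheap on a small list
            isTopSorted = true;
        }
        return new ArrayList<>(top.subList(0, Math.min(n, top.size())));
    }

    /** Expensive full scan: keep the best 2 * MIN_TOP players and reset the threshold. */
    private void rebuildTop() {
        List<Player> all = new ArrayList<>(allPlayers);
        all.sort((a, b) -> Double.compare(b.score, a.score));
        top.clear();
        top.addAll(all.subList(0, Math.min(2 * MIN_TOP, all.size())));
        threshold = top.isEmpty() ? Double.NEGATIVE_INFINITY : top.get(top.size() - 1).score;
        isTopSorted = true;
    }
}
```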
I assume you can use plenty of memory to do this, or that memory is not a concern for you. Now, as you want only the top 25 entries for any category, I would suggest the following:
Have a HashSet of Player objects. Player objects hold info such as name, games won, money, etc.
Now have a HashMap of category name vs. a TreeSet of the top 25 player objects in that category. The category name may be a checksum of some columns, say games won, money, achievement, etc.
HashMap<String /* category name */, TreeSet<Player> /* sorted based on the criteria */>
Whenever you update a player object, you update the common HashSet first and then check if the player object is a candidate for top 25 entries in any of the categories. If it is a candidate, some player object unfortunately may lose their ranking and hence may get kicked out of the corresponding treeset.
>> if you make the TreeSet sorted by the score, it'll break whenever the score changes (and the player will not be found in it)
Correct, now I get the point :) So I would do the following to mitigate the problem: the player object will have a field indicating which categories it is already in (basically a set of categories). While updating a player object, we check whether the player is already in some categories; if so, we first rearrange the corresponding TreeSets, i.e. remove the player object, adjust the score, and add it back to the TreeSet. Whenever a player object is kicked out of a category, we remove that category from the field holding the set of categories the player is in.
Now, what do you do if the look-up is done with a brand-new search criterion (meaning the top 25 has not been computed for this criterion yet)?
Traverse the HashSet of players and build the top entries for this category from scratch. This will be an expensive operation, just like indexing something afresh.
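A sketch in Java of that HashSet plus per-category TreeSet bookkeeping. The Player fields, the category comparators, and the update entry point are assumptions used for illustration; the important detail from the discussion above is that a player is removed from every TreeSet it belongs to before its fields are mutated, and only re-added afterwards.

```java
import java.util.*;
import java.util.function.Consumer;

class Player {
    final String name;
    long score, money, gamesWon;
    final Set<String> categories = new HashSet<>();   // categories whose top-25 set currently holds this player
    Player(String name) { this.name = name; }
}

class CategoryRankings {
    private static final int TOP_SIZE = 25;

    private final Set<Player> allPlayers = new HashSet<>();
    private final Map<String, TreeSet<Player>> top = new HashMap<>();

    void defineCategory(String name, Comparator<Player> byValueDescending) {
        // Tie-break on player name so distinct players never collapse into one TreeSet entry.
        top.put(name, new TreeSet<>(byValueDescending.thenComparing(p -> p.name)));
    }

    void addPlayer(Player p) {
        allPlayers.add(p);
        for (String category : top.keySet()) offer(category, p);
    }

    /** Remove the player from every TreeSet it is in, apply the mutation, then re-offer it everywhere. */
    void update(Player p, Consumer<Player> mutation) {
        for (String category : new ArrayList<>(p.categories)) {
            top.get(category).remove(p);               // safe: the comparator fields have not changed yet
            p.categories.remove(category);
        }
        mutation.accept(p);
        for (String category : top.keySet()) offer(category, p);
    }

    List<Player> top25(String category) { return new ArrayList<>(top.get(category)); }

    private void offer(String category, Player p) {
        TreeSet<Player> set = top.get(category);
        set.add(p);
        p.categories.add(category);
        if (set.size() > TOP_SIZE) {
            Player evicted = set.pollLast();           // the worst of the 26
            evicted.categories.remove(category);
        }
    }
}
```

A category would then be registered with something like rankings.defineCategory("score", Comparator.comparingLong((Player q) -> q.score).reversed()).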
Okay, I'm gonna be as detailed as possible here.
Imagine the user goes through a set of 'options' he can choose from. Every time he chooses, he gets, say, 4 different options. There are many more options that can appear in those 4 'slots'. Each of those has a certain definite and known probability of appearing. Not all options are equally probable to appear, and some options require others to have already been selected previously, in a complex interdependence tree. (This part I have already defined.)
When the user chooses one of the 4, he is presented another choice of 4 options. The pool of options is defined again and can depend on what the user has chosen previously.
Among all possible 'options' that can ever appear, there are a certain select few which are special, call them KEY options.
When the program starts, the user is presented the first 4 options. For every one of those 4, the program needs to compute the total probability that the user will 'achieve' all the KEY options in a period of (variable) N choices.
e.g. if there are 4 options altogether, the probability of achieving any one of them is exactly 1, since all of them appear right at the beginning.
If anyone can advise me as to what logic i should start with, I'd be very grateful.
I was thinking of counting all possible choice sequences and counting the ones that result in the KEY options being chosen within N 'steps', but the problem is that the probabilities of the options appearing are not uniform, and the pool of options changes as the user chooses and accumulates options.
I'm having difficulty turning the well-defined probabilities and dependencies of the options into an algorithm that gives a sensible total probability, so that the user knows each time which of the 4 puts him in the best position to eventually acquire the KEY options.
Any ideas?
EDIT:
here's an example:
say there are 7 options in the pool. option1, ..., option7
option7 requires option6; option6 requires option4 and option5;
options 1 through 5 don't require anything and can appear immediately, with respective probabilities option1.p, ..., option5.p;
the KEY option is, say, option7;
the user gets 4 randomly (but weighted) chosen options among options 1-5, and the program needs to say something like:
"if you choose (first), you have ##% chance of getting option7 in at most N tries." analogous for the other 3 options.
naturally, for some low N it is impossible to get option7, and for some large N it is certain. N can be chosen but is fixed.
EDIT: So, the point here is NOT that the user chooses randomly. The point is that the program suggests which option to choose, so as to maximize the probability that eventually, after N steps, the user will have been offered all the KEY options.
For the above example, say we choose N = 4. The program needs to tell us which of the first 4 options that appeared (any 4 among options 1-5), when chosen, yields the best chance of obtaining option7. Since for option7 you need option6, and for that you need option4 and option5, it is clear that you MUST select either option4 or option5 in the first set of choices. One of them is certain to appear, of course.
Let's say we get this for the first choice {option3, option5, option2, option4}. The program then says:
if you chose option3, you'll never get option7 in 4 steps. p = 0;
if you chose option5, you might get option7, p=....;
... option2, p = 0;
... option4, p = ...;
Whatever we choose, for the next 4 options the p's are recalculated. Clearly, if we chose option3 or option2, every further choice has exactly 0 probability of getting us to option7. But for option4 and option5, p > 0.
Is it clearer now? I don't know how to compute these probabilities p.
This sounds like a moderately fiddly Markov chain type problem. Create a node for every state; a state has no history and depends only on the possible paths out of it (each weighted with some probability). You put a probability on each node (the chance that the user is in that state), so for the first step there will be a 1 on his starting node and 0 everywhere else. Then, according to which nodes are adjacent and the chances of getting to them, you iterate to the next step by updating the probabilities on each vertex. So you can easily calculate which states the user could land on in, say, 15 steps, and the associated probabilities. If you are interested in asymptotic behaviour (what would happen if he could play forever), you set up a big pile of simultaneous linear equations and solve them directly, or use some tricks if your tree or graph has a neat form. You often end up with cyclical solutions, where the user could get stuck in a loop, and so on.
If you think the user selects the options at random, and he is always presented the same distribution of options at a node, you can model this as a random walk on a graph. There was a recent nice post on calculating the terminating probabilities of particular random walks on the Mathematica blog.
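A toy Java sketch of that iteration: states are nodes, a transition matrix holds the probabilities of moving between them, and you propagate the probability vector for N steps and read off the mass sitting in the absorbing "KEY option achieved" state. The three-state chain and its numbers are made up purely for illustration; a real model would have one state per relevant combination of acquired options.

```java
/** Propagates a probability distribution over states for a fixed number of steps. */
public class MarkovSketch {

    /** One step: next[j] = sum over i of prob[i] * transition[i][j]. */
    static double[] step(double[] prob, double[][] transition) {
        double[] next = new double[prob.length];
        for (int i = 0; i < prob.length; i++)
            for (int j = 0; j < prob.length; j++)
                next[j] += prob[i] * transition[i][j];
        return next;
    }

    public static void main(String[] args) {
        // Hypothetical 3-state chain: 0 = start, 1 = has prerequisite, 2 = has KEY option (absorbing).
        double[][] transition = {
                {0.6, 0.4, 0.0},   // from start: 40% chance to gain the prerequisite
                {0.0, 0.7, 0.3},   // with prerequisite: 30% chance to gain the KEY option
                {0.0, 0.0, 1.0},   // KEY option achieved: stay there
        };
        double[] prob = {1.0, 0.0, 0.0};   // the user starts in state 0 with certainty

        int steps = 4;                      // N choices
        for (int s = 0; s < steps; s++) prob = step(prob, transition);

        System.out.printf("P(KEY option within %d steps) = %.4f%n", steps, prob[2]);
    }
}
```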
Regardless of the layout being used for the tiles, is there any good way to divvy out the tiles so that you can guarantee the user that, at the beginning of the game, there exists at least one path to completing the puzzle and winning the game?
Obviously, depending on the user's moves, they can cut themselves off from winning. I just want to be able to always tell the user that the puzzle is winnable if they play well.
If you randomly place tiles at the beginning of the game, it's possible that the user could make a few moves and not be able to do any more. The knowledge that a puzzle is at least solvable should make it more fun to play.
Place all the tiles in reverse (i.e. lay out the board starting in the middle, working outwards).
To tease the player further, you could do it visibly but at very high speed.
Play the game in reverse.
Randomly lay out pieces pair by pair, in places where you could slide them into the heap. You'll need a way to know where you're allowed to place pieces in order to end up with a heap that matches some preset pattern, but you'd need that anyway.
I know this is an old question, but I came across this when solving the problem myself. None of the answers here are quite perfect, and several of them have complicated caveats or will break on pathological layouts. Here is my solution:
Solve the board (forward, not backward) with unmarked tiles. Remove two free tiles at a time. Push each pair you remove onto a "matched pair" stack. Often, this is all you need to do.
If you run into a dead end (numFreeTiles == 1), just reset your generator :) I have found I usually don't hit dead ends, and have so far needed a maximum retry count of 3 for the 10-or-so layouts I have tried. Once I hit 8 retries, I give up and just randomly assign the rest of the tiles. This allows me to use the same generator both for setting up the board and for the shuffle feature, even if the player screwed up and made a 100% unsolvable state.
Another solution when you hit a dead end is to back out (pop off the stack, replacing tiles on the board) until you can take a different path. Take a different path by making sure you match pairs that will remove the original blocking tile.
Unfortunately, depending on the board, this may loop forever. If you end up removing a pair that resembles a "no outlet" road, where all subsequent "roads" are a dead end, and there are multiple dead ends, your algorithm will never complete. I don't know if it is possible to design a board where this would be the case, but if so, there is still a solution.
To solve that bigger problem, treat each possible board state as a node in a DAG, with each selected pair being an edge on that graph. Do a random traversal, until you find a leaf node at depth 72. Keep track of your traversal history so that you never repeat a descent.
Since dead ends are more rare than first-try solutions in the layouts I have used, what immediately comes to mind is a hybrid solution. First try to solve it with minimal memory (store selected pairs on your stack). Once you've hit the first dead end, degrade to doing full marking/edge generation when visiting each node (lazy evaluation where possible).
I've done very little study of graph theory, though, so maybe there's a better solution to the DAG random traversal/search problem :)
Edit: You could actually use any of my solutions with generating the board in reverse, à la the Oct 13th 2008 post. You still have the same caveats, because you can still end up with dead ends. Generating a board in reverse has more complicated rules, though. E.g., you are guaranteed to fail your setup if you don't start at least SOME of your rows with the first piece in the middle, such as in a layout with one long row. Picking a completely random (legal) first move in a forward-solving generator is more likely to lead to a solvable board.
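A compact Java sketch of the forward-solving generator with the reset-on-dead-end strategy described above. To keep it self-contained, it uses a heavily simplified single-layer 8x18 layout in which a tile is free when at least one of its row neighbours is gone; a real implementation would substitute the proper Mahjong freeness test (nothing on top, and the left or right side open). With this simplified layout dead ends cannot actually occur, but the retry loop shows where the reset would happen.

```java
import java.util.*;

/** Generates a solvable deal by "solving" an unmarked board forward: repeatedly remove
 *  two free tiles, remember the pair, and finally assign matching faces to each pair. */
public class MahjongDealGenerator {
    private static final int ROWS = 8, COLS = 18;          // 144 positions, single layer (simplification)
    private final Random rng = new Random();

    /** faces: 72 values, one per pair; returns a filled board or null after too many retries. */
    public int[][] generate(List<Integer> faces) {
        for (int attempt = 0; attempt < 8; attempt++) {      // reset on dead ends, as described above
            Deque<int[]> pairs = trySolveUnmarked();
            if (pairs != null) {
                int[][] board = new int[ROWS][COLS];
                int f = 0;
                for (int[] p : pairs) {                      // order of face assignment does not
                    int face = faces.get(f++);               // affect solvability
                    board[p[0]][p[1]] = face;
                    board[p[2]][p[3]] = face;
                }
                return board;
            }
        }
        return null;
    }

    /** Removes free tiles two at a time; returns the pair stack, or null on a dead end. */
    private Deque<int[]> trySolveUnmarked() {
        boolean[][] present = new boolean[ROWS][COLS];
        for (boolean[] row : present) Arrays.fill(row, true);
        Deque<int[]> pairs = new ArrayDeque<>();
        int remaining = ROWS * COLS;

        while (remaining > 0) {
            List<int[]> free = new ArrayList<>();
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    if (present[r][c] && isFree(present, r, c)) free.add(new int[]{r, c});
            if (free.size() < 2) return null;                 // dead end: the caller resets the generator

            int[] a = free.remove(rng.nextInt(free.size()));
            int[] b = free.get(rng.nextInt(free.size()));
            present[a[0]][a[1]] = false;
            present[b[0]][b[1]] = false;
            pairs.push(new int[]{a[0], a[1], b[0], b[1]});
            remaining -= 2;
        }
        return pairs;
    }

    /** Simplified freeness: the left or right neighbour in the row is already gone. */
    private boolean isFree(boolean[][] present, int r, int c) {
        boolean leftOpen = (c == 0) || !present[r][c - 1];
        boolean rightOpen = (c == COLS - 1) || !present[r][c + 1];
        return leftOpen || rightOpen;
    }
}
```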
The only thing I've been able to come up with is to place the tiles down in matching pairs as a kind of reverse Mahjong Solitaire game. So, at any point during tile placement, the board should look like it's in the middle of a real game (i.e. no tiles floating three layers up above other tiles).
If the tiles are placed in matching pairs in a reverse game, it should always result in at least one forward path to solve the game.
I'd love to hear other ideas.
I believe the best answer has already been pushed up: creating a set by solving it "in reverse" - i.e. starting with a blank board, then adding a pair somewhere, add another pair in a solvable position, and so on...
If you prefer a "Big Bang" approach (generating the whole set randomly at the beginning), are a very macho developer or just feel masochistic today, you could represent all the pairs you can take out of the given set and how they depend on each other via a directed graph.
From there, you'd only have to get the transitive closure of that set and determine if there's at least one path from at least one of the initial legal pairs that leads to the desired end (no tile pairs left).
Implementing this solution is left as an exercise to the reader :D
Here are the rules I used in my implementation.
When building the heap, for each tile of a pair separately, find cells (places) which:
have all cells at lower levels already filled;
do not block the first tile of the pair when placing the second one (considering the first tile is already on the board);
are "at the edge" of the already built heap, i.e.:
EITHER have at least one neighbour on the left or right side,
OR are the first tile in a row (all cells to the right and left are recursively free).
These rules do not guarantee that a build will always succeed - it sometimes leaves the last 2 free cells blocking each other, and the build has to be retried (or at least the last few tiles).
In practice, the "turtle" layout was built in no more than 6 retries.
Most existing games seem to restrict placing the first ("first in row") tiles to somewhere in the middle. This results in more convenient configurations, where there are no tiles at the ends of very long rows staying on the board until the player's last moves. However, the "middle" is different for different configurations.
Good luck :)
P.S.
If you've found an algorithm that builds a solvable heap in one pass, please let me know.
You have 144 tiles in the game; each of the 144 tiles has a block list.
(top tile on stack has an empty block list)
All valid moves require that their current vertical block list be empty. This can be a 144x144 matrix, so about 20 KB of memory, plus a LEFT and a RIGHT block list, also about 20 KB each.
Generate a valid move table from (remaining_tiles) AND ((empty CURRENT VERTICAL BLOCK LIST) AND ((empty CURRENT LEFT BLOCK LIST) OR (empty CURRENT RIGHT BLOCK LIST)))
Pick 2 random tiles from the valid move table, record them
Update the current tables (vertical, left and right), and record the removed tiles on a stack.
Now we have a list of moves that constitute a valid game. Assign matching tile types to each of the 72 moves.
For challenging games, track when each tile becomes available. Find sets whose tiles become available (early, early, early, late) and (late, late, late, early); since the board is blank, you will find 1 EE, 1 LL and 2 LE blocks. Of the 2 LE blocks, find an EARLY that blocks ANY other EARLY (except right-blocking a left-side piece).
Once you've got a valid game, play around with the ordering.
Solitaire? Just a guess, but I would assume that your computer would need to beat the game (or close to it) to determine this.
Another option might be to have several preset layouts (that allow winning) mixed in with your current level.
To some degree you could try making sure that one of the 4 tiles is no more than X layers below another.
Most games I see have the shuffle command for when someone gets stuck.
I would try a mix of things and see what works best.