When a queue is implemented using an array, inserting an element sets REAR = REAR + 1, and deleting an element sets FRONT = FRONT + 1.
Initially FRONT = REAR = -1, indicating the queue is empty. When the first element is added, FRONT = REAR = 0 (assuming the array runs from 0 to n-1).
Now assume a state where FRONT = 0 and REAR = n-1, implying the queue is full. When a few elements are removed, the FRONT pointer moves; let us say FRONT = 5 and REAR = 10. Hence array locations 0 to 4 are free.
When I wish to add an element now, I add it at location 0 and FRONT points to it. But locations 1, 2, 3 and 4 are still free.
But the next time I try to insert an element, the program will report that the queue is full, since FRONT = 0 and REAR = n-1. How do I insert into the remaining locations, and how can I understand this queue arithmetic better?
I would also like to understand how FRONT = REAR + 1 acts as a condition for checking whether the queue is full.
You want to think circularly here in terms of relative, circular ranges instead of absolute, linear ones. So you don't want to get too hung up on the absolute indices/addresses of FRONT and REAR. They're relative to each other, and you can use modulo arithmetic to start getting back to the beginning of the array like Pac-Man when he goes off the side of the screen. It can be useful when you're drawing these things out to literally draw your array as a circle on your whiteboard.
When I wish to add an element now, I add at the location 0 and FRONT points to it. But the locations 1, 2, 3 and 4 are free.
I think you got it backwards a bit here. By your own logic, insertions advance REAR, not FRONT. In that case REAR would be at 0 and FRONT would still be at 5. If you push again, REAR = 1 and you'd write into index 1, with FRONT still at 5.
If N=3, FRONT=2 and REAR=2, we have one element in the queue (after pushing and popping a lot). When you push (enqueue), we set REAR = (REAR + 1) % N, making FRONT=2, REAR=0 and giving us two elements. If we push again, FRONT=2, REAR=1, giving us 3 elements, and the queue is full.
Visually:
   R
[..x]
   F

 R
[x.x]
   F

  R
[xxx]
   F
... and now we're full. A queue is full if the next circular index from the REAR is the FRONT. In the case where FRONT=2, REAR=1, we can see that (REAR+1)%N == FRONT, so it's full.
If we popped (dequeued) at this point, we would set FRONT=(FRONT+1)%N and it would look like this:
  R
[xx.]
 F
I would also like to understand how FRONT = REAR + 1 acts as a condition for checking whether the queue is full.
This doesn't suffice when you use this kind of circular indexing. We need a slight augmentation: the queue is full when FRONT == (REAR+1)%N. We need that modulo arithmetic to handle those "wrap around to the other side" cases.
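To see all of this in running code, here is a minimal Python sketch (my own, not from the question). It uses the same convention as above: FRONT is the index of the oldest element and REAR the index of the newest. An explicit count is kept so that "empty" and "full" can be told apart; whenever the queue is full, the (REAR + 1) % N == FRONT test from above holds.

class CircularQueue:
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.front = 0     # index of the oldest element
        self.rear = -1     # index of the newest element
        self.count = 0

    def is_empty(self):
        return self.count == 0

    def is_full(self):
        # when this is True, (self.rear + 1) % self.n == self.front also holds
        return self.count == self.n

    def enqueue(self, x):
        if self.is_full():
            raise OverflowError("queue is full")
        self.rear = (self.rear + 1) % self.n    # wrap around like Pac-Man
        self.buf[self.rear] = x
        self.count += 1

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        x = self.buf[self.front]
        self.front = (self.front + 1) % self.n  # FRONT wraps around too
        self.count -= 1
        return x

q = CircularQueue(3)
for item in "abc":
    q.enqueue(item)
print(q.is_full())    # True, and (REAR + 1) % N == FRONT holds
print(q.dequeue())    # 'a'
q.enqueue("d")        # reuses slot 0, which the dequeue just freed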
Related
I'm trying to code AI for a game somewhat similar to Tic-Tac-Toe. You can see its rules here.
The min-max algorithm and analysis function I'm using can be found here
What I've tried so far:
I've built some patterns which would be good for the current player (in Python).
e.g. my_pattern = " ".join(str(x) for x in [piece, None, piece, piece, None])
I'm matching such patterns against all 6 possible orientations on the hexagonal game board for every piece (not for blank spaces). To be precise, I'm matching my_pattern against 6 different arrays (each array represents one of the 6 orientations), roughly as sketched below.
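For illustration, a rough sketch of that matching step (the concrete values of piece and orientations below are made up, not from the original code):

piece = 1    # assumed encoding of the current player's piece

# the pattern from above: piece, gap, piece, piece, gap
my_pattern = " ".join(str(x) for x in [piece, None, piece, piece, None])

# `orientations` stands for the six line-arrays around one cell of the
# hexagonal board, each a list of cell values along one direction
orientations = [
    [1, None, 1, 1, None, 2],
    [2, 2, None, 1, None, None],
    # ... four more directions
]

def matches_any(pattern, lines):
    # assumes every cell renders to a single token (1, 2 or None), so a plain
    # substring test on the space-joined line is good enough
    return any(pattern in " ".join(str(x) for x in line) for line in lines)

print(matches_any(my_pattern, orientations))   # True for the first line above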
Now, what should this analysis function actually calculate?
The score of the entire state of the board?
The score of the last move made on the board?
If someone could accurately describe the purpose of the analysis function, that would be great.
The analysis function evaluates the current state of the board. It may or may not take into account the last move, any of the previous moves, or the order of moves used to reach the position. It should also consider whose turn it is to play.
What I mean is that the same board can be good or bad for white or black depending on whose turn it is (the situation called zugzwang in chess).
Also, the same board can be reached through a variety of move sequences, so it depends on the type of game whether you want to include that in the analysis or not. (High-level chess engines certainly consider the order of moves, though not for evaluating the current board but for further analysis of the possibility of reaching that position.) In this game, however, I don't think there is any need to include the last move or any of the previous moves (or their order) in your analysis function.
EDIT:
An example of an analysis function:
value = 10000*W(4) - 10000*W(3) + 200*W(2.1) + 200*W(1.2) + 100*W(2) + 100*W(1.1) + 2*W(1e) + 10*W(1m) + 30*W(1c) - (10000*B(4) - 10000*B(3) + 200*B(2.1) + 200*B(1.2) + 100*B(2) + 100*B(1.1) + 2*B(1e) + 10*B(1m) + 30*B(1c))
where:
W = white pieces
B = black pieces
4 = made line of 4 pieces
3 = made line of 3 pieces
2 = made line of 2 pieces with the possibility of getting extended to 4 from at least one side
. = blank (i.e., 1.2 = W.WW on the board)
1.1 = Piece|Blank|Piece, with the possibility of extending to 4 from at least one side
e|m|c = edge|middle|center of board, with the possibility of extending to 4 from either side
A positive result of this analysis function means white is better, 0 indicates a balanced board, and a negative value means black has the advantageous position. You can change the weights according to the results of the tests you will run. Finding all possible combinations is an exhaustive task, but the game is such :)
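For illustration only, the weighted sum above could be coded like the sketch below. The feature counting itself (how W(4), W(2.1), etc. are obtained from the board) is assumed to exist elsewhere; only the names and made-up counts here are mine.

# weights copied from the formula above; "3" carries the negative sign
# because the formula subtracts it
WEIGHTS = {
    "4": 10000, "3": -10000, "2.1": 200, "1.2": 200,
    "2": 100, "1.1": 100, "1e": 2, "1m": 10, "1c": 30,
}

def side_score(counts):
    # `counts` maps a feature name ("4", "2.1", "1e", ...) to how many times
    # that side has it on the board; missing features count as 0
    return sum(weight * counts.get(feature, 0) for feature, weight in WEIGHTS.items())

def analysis(white_counts, black_counts):
    # positive: white is better, 0: balanced, negative: black is better
    return side_score(white_counts) - side_score(black_counts)

# made-up counts, just to show the call
print(analysis({"2": 3, "1.1": 1}, {"3": 1}))   # 300 + 100 - (-10000) = 10400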
I came across this question and thought of asking what the best possible way to achieve it would be.
Given a FIFO queue (you can assume the queue contains integers). Every time an insertion or deletion happens, a new version of the queue is created. At any time, you have to print the whole content of any older version of the queue with minimal time and space complexity.
This is assuming you meant a FIFO queue and not some other kind of queue, like a priority queue.
Store it in an array and keep two pointer variables, one to its head and another to its tail.
For example:
insert 3 2 5 9 - version 1
q = [3 2 5 9]
     ^     ^
delete, delete - version 2
q = [3 2 5 9]
         ^ ^
insert 6 3 4 - version 3
q = [3 2 5 9 6 3 4]
         ^       ^
To print a version, you just need to store two values for each version: where the pointers were. Printing is then linear in the size of the queue at that version.
The vector can grow big, but you have to store every element there ever was in it if you want to be able to print any version.
You can also use a linked list to avoid array resizing if you consider that a problem. Just make sure not to remove a node from memory when deleting.
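A rough Python sketch of this idea (the class and method names are mine): every element ever inserted stays in one growing list, and each version is just a recorded (head, tail) pair.

class VersionedQueue:
    def __init__(self):
        self.items = []        # every element there ever was; never shrinks
        self.head = 0          # index of the current front element
        self.versions = []     # versions[k] = (head, tail) after operation k

    def enqueue(self, x):
        self.items.append(x)
        self.versions.append((self.head, len(self.items)))

    def dequeue(self):
        x = self.items[self.head]
        self.head += 1
        self.versions.append((self.head, len(self.items)))
        return x

    def print_version(self, k):
        head, tail = self.versions[k]   # O(1) lookup of the stored pointers
        print(self.items[head:tail])    # printing is linear in that version's size

q = VersionedQueue()
for x in [3, 2, 5, 9]:
    q.enqueue(x)               # versions 0..3
q.dequeue(); q.dequeue()       # versions 4, 5
for x in [6, 3, 4]:
    q.enqueue(x)               # versions 6..8
q.print_version(3)             # [3, 2, 5, 9]
q.print_version(5)             # [5, 9]
q.print_version(8)             # [5, 9, 6, 3, 4]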
Your problem is to make the queue a partially persistent data structure.
Partially persistent means that you can query any version, but you only can make updates in the most recent version.
A couple of years ago I gave a talk about making any pointer-based data structure persistent. It was based on "Making Data Structures Persistent" by Driscoll, Sarnak, Sleator and Tarjan.
Clearly, any queue can be implemented as a linked data structure. If you want the simplest practical version, you may be interested in the method called "The Fat Node Method", described on page 91 of the above PDF.
The idea is to store in every node several pointers to the next element, corresponding to different versions of the queue. Each pointer is assigned a version number called a timestamp.
For every insert or delete operation, you update pointers only in the nodes touched by that operation.
For a lookup in the i-th version of the queue, you simply follow the pointers with the largest timestamp not exceeding i. You can find the pointer to follow using binary search.
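As a small illustrative sketch of a fat node (the names and structure below are mine, not the paper's): each node keeps a sorted list of timestamped next pointers, and a lookup binary-searches for the pointer with the largest timestamp not exceeding the requested version.

import bisect

class FatNode:
    def __init__(self, value):
        self.value = value
        self.next_versions = []   # sorted timestamps of pointer updates
        self.next_nodes = []      # next_nodes[i] is the successor as of next_versions[i]

    def set_next(self, version, node):
        # updates only ever happen in the newest version, so appending keeps the order
        self.next_versions.append(version)
        self.next_nodes.append(node)

    def get_next(self, version):
        # follow the pointer with the largest timestamp not exceeding `version`
        i = bisect.bisect_right(self.next_versions, version) - 1
        return self.next_nodes[i] if i >= 0 else None

a, b, c = FatNode(1), FatNode(2), FatNode(3)
a.set_next(1, b)             # from version 1 on, a's successor is b
a.set_next(5, c)             # from version 5 on, a's successor is c
print(a.get_next(3).value)   # 2: version 3 still sees b
print(a.get_next(7).value)   # 3: version 7 sees c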
The PDF also describes a more complex but even more efficient method called "The Node-Copying Method".
There are many possible solutions. Here is one with all operations guaranteed O(log(n)) and normal operations an amortized O(log(log(n))).
Keep an operation counter. Store the items in a skip list (see http://en.wikipedia.org/wiki/Skip_list for a definition), ordered by the insertion operation. When an element is removed, fill in the id of the removal operation. For efficiency of access, keep a pair of pointers to the current head and current tail.
To insert an element, add it at the current tail. To remove (and return) an element, take it from the current head. To return a past state, search the skip list for the then head, then walk the list until you reach the then tail.
The log(n) operations here are finding the past head, and (very occasionally) inserting a new head that happens to be a high node in the skip list.
Now let us assume that in a FIFO queue the head pointer is at the beginning of the array and the tail pointer is at the end of the most recent insertion. Store the current tail position in a variable; on the next insertion that stored value marks where the previous version ended, and the tail again moves to the end of the new insertion. This way, using just a single variable and with only insertions taking place, previous versions can be printed from the beginning of the array up to the stored tail pointer.
insert 3 2 1 4
a=[3 2 1 4] --version 1
   ^     ^
   b     e
p = e;
insert 5 7 8
a=[3 2 1 4 5 7 8] --version 2
   ^           ^
   b           e
here version 1 = position 0 to position p = [3 2 1 4]
p = e;
delete 3
a=[2 1 4 5 7 8] --version 3
   ^         ^
   b         e
here version 2 = position 0 to position p = [2 1 4 7 8]
where
b = beginning
e = end
Hence, by using a single variable to hold the previous version's tail position and assuming the beginning position is always 0, previous versions can easily be printed.
I've been struggling with this one for a while and am not able to come up with anything. Any pointers would be really appreciated.
The problem is: given the language of all DFAs that accept only words of even length, prove whether it is in P or not.
I've considered making a Turing machine that goes over the given DFA with something like BFS/Dijkstra's algorithm in order to find all the paths from the starting state to the accepting ones, but I have no idea how to handle loops.
Thanks!
I think it's in P, at worst quadratic. Each state of the DFA can be in one of four parity states:
unvisited -- state 0
known to be reachable in an odd number of steps -- state 1
known to be reachable in an even number of steps -- state 2
known to be reachable in both, odd and even numbers of steps -- state 3
Mark all states as unvisited, put the starting state in a queue (FIFO, priority, whatever), set its parity state to 2.
child_parity(n)
    switch(n)
        case 0: error
        case 1: return 2
        case 2: return 1
        case 3: return 3

while(queue not empty)
    dfa_state <- queue
    step_parity = child_parity(dfa_state.parity_state)
    for next_state in dfa_state.children
        old_parity = next_state.parity_state
        next_state.parity_state |= step_parity
        if old_parity != next_state.parity_state  // we have learnt something new
            add next_state to queue               // remove duplicates if applicable

for as in accept_states
    if as.parity_state & 1 == 1
        return false
return true
Unless I'm overlooking something, each DFA state is treated at most twice, each time checking all of its children (at most one per alphabet symbol) for required action.
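A rough Python translation of the pseudocode above (the DFA representation here, delta[state] as a list of successor states, one per alphabet symbol, is my own assumption for the sketch):

from collections import deque

UNVISITED, ODD, EVEN = 0, 1, 2   # bit flags; ODD | EVEN == 3 means both parities reachable

def child_parity(p):
    # one extra step flips the parity: odd-reachable parents give even-reachable children
    return {ODD: EVEN, EVEN: ODD, ODD | EVEN: ODD | EVEN}[p]

def accepts_only_even_length(n, start, accepting, delta):
    parity = [UNVISITED] * n
    parity[start] = EVEN                 # reachable in 0 steps, an even number
    queue = deque([start])
    while queue:
        s = queue.popleft()
        step = child_parity(parity[s])
        for t in delta[s]:
            old = parity[t]
            parity[t] |= step
            if parity[t] != old:         # learnt something new, revisit t
                queue.append(t)
    # reject if any accepting state is reachable in an odd number of steps
    return all(parity[a] & ODD == 0 for a in accepting)

# Two-state DFA over {a, b} accepting exactly the even-length words:
delta = [[1, 1], [0, 0]]
print(accepts_only_even_length(2, 0, {0}, delta))   # True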
It would seem this only requires two states.
Your entry state would be the empty string, and it would also be an accept state. Adding anything to the string would move you to the next state, which we can call the 'odd' state, and which is not an accept state. Adding another character puts us back in the original state.
I guess I'm not sure on the terminology anymore of whether a language is in P or not, so if you gave me a definition there I could tell you if this fits it, but this is one of the simplest DFAs around...
Assuming the tree is balanced, how much stack space will the routine use for a tree of 1,000,000 elements?
void printTree(const Node *node) {
    char buffer[1000];
    if (node) {
        printTree(node->left);
        getNodeAsString(node, buffer);
        puts(buffer);
        printTree(node->right);
    }
}
This was one of the algorithm questions in "The Pragmatic Programmer", where the answer was that 21 buffers are needed (lg(1M) ~= 20, plus the additional 1 at the very top).
But I am thinking that it requires more than 1 buffer at levels below the top level, due to the 2 calls it makes to itself for the left and right nodes. Is there something I missed?
*Sorry, but this really is not homework. I don't see this in the book site's errata.
First the left node call is made, then that call returns (and so its stack is available for re-use), then there's a bit of work, then the right node call is made.
So it's true that there are two buffers at the next level down, but those two buffers are required consecutively, not concurrently. So you only need to count one buffer in the high-water-mark stack usage. What matters is how deep the function recurses, not how many times in total the function is called.
This is assuming, of course, that the code is written in a language similar to C, and that the C implementation uses a stack for automatic variables (I've yet to see one that doesn't), blah blah.
The first call will recurse all the way to the leaf node, then return. Then the second call will start -- but by the time the second call takes place, all activation records from the first call will have been cleared off the stack. IOW, there will only be data from one of those on the stack at any given time.
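To see the "one buffer per level of depth" point concretely, here is a small simulation (mine, not the book's): instead of building ~1M nodes, it recurses on the remaining height of a balanced tree, which produces the same call pattern, and tracks how many frames are live at once.

max_live = 0   # high-water mark of simultaneously live printTree frames

def walk(height, live=1):
    # simulate printTree on a perfectly balanced tree of the given height;
    # the height == 0 call stands for the call made on a NULL child, which in
    # the C code above still allocates its buffer before the if(node) test
    global max_live
    max_live = max(max_live, live)
    if height == 0:
        return
    walk(height - 1, live + 1)   # left subtree: all of its frames are gone...
    walk(height - 1, live + 1)   # ...before the right subtree call is made

walk(20)           # a balanced tree of ~2^20, about 1M nodes, is ~20 levels deep
print(max_live)    # 21: the depth decides the stack high-water mark, not the call count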
When using a Visual Basic two-dimensional array, which index varies fastest? In other words, when filling in an array, should I write...
For i = 1 To 30
    For j = 1 To 30
        myarray(i, j) = something
    Next
Next
or
For i = 1 To 30
    For j = 1 To 30
        myarray(j, i) = something
    Next
Next
(or alternatively does it make very much difference)?
Column major. VB6 uses COM SAFEARRAYs and lays them out in column-major order. The fastest access is like this (although it won't matter if you only have 30x30 elements).
For i = 1 To 30
    For j = 1 To 30
        myarray(j, i) = something
    Next
Next
If you really want to speed up your array processing, consider the tips in Advanced Visual Basic by Matt Curland, which shows you how to poke around inside the underlying SAFEARRAY structures.
For instance accessing a 2D SAFEARRAY is considerably slower than accessing a 1D SAFEARRAY, so in order to set all array entries to the same value it is quicker to bypass VB6's SAFEARRAY descriptor and temporarily make one of your own. Page 33.
You should also consider turning on "Remove array bounds checks" in the project properties compile options.
I don't know if (or where) this is specified. It might be left as 'implementation defined'.
But I would expect the first index to be the 'lower' dimension, i.e. the big chunks, and the following index positions to be ever more fine-grained.
Edit: it seems I was wrong. VB6 uses a column-first approach.
Does it make much difference?
You would have to measure, but putting the faster-varying (first) index in the inner loop allows the compiler to generate faster code and can make better use of the processor cache (locality). But with a size of 30 I wouldn't expect much difference.