Where should the rear pointer point in a queue?
The slot where a new element WILL BE inserted, or
The slot where the last element of the queue resides?
In my research I have found both of the above given as answers.
I'd say go with the TailPointer pointing to the last element that was added, rather than to the empty slot where the next element would be inserted. I have a few reasons for that:
To get the last element you can read the value at TailPointer directly, which matches the name, instead of having to use TailPointer - 1.
If you have an array as the backing data store for your queue, it is natural to check tailPointer == dataStore.Length - 1 (since 0-based indexing is most common).
Also, you would be wrapping your data around to the initial indexes (the ones before the head pointer) as you dequeue data.
If there is no data in the queue, you can simply set the TailPointer to -1.
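Putting those points together, here's a minimal sketch of an array-backed circular queue that follows this convention (in Go, with illustrative names, not taken from any particular library):

package main

import "fmt"

// A fixed-capacity circular queue where tail points at the LAST
// element added (not at the next free slot). tail == -1 marks empty.
type Queue struct {
	data []int
	head int // index of the oldest element
	tail int // index of the most recently added element; -1 when empty
	size int
}

func NewQueue(capacity int) *Queue {
	return &Queue{data: make([]int, capacity), head: 0, tail: -1}
}

func (q *Queue) Enqueue(v int) bool {
	if q.size == len(q.data) {
		return false // full
	}
	q.tail = (q.tail + 1) % len(q.data) // wraps around to the slots before head
	q.data[q.tail] = v
	q.size++
	return true
}

func (q *Queue) Dequeue() (int, bool) {
	if q.size == 0 {
		return 0, false
	}
	v := q.data[q.head]
	q.head = (q.head + 1) % len(q.data)
	q.size--
	if q.size == 0 {
		q.head, q.tail = 0, -1 // back to the empty state
	}
	return v, true
}

// Last reads the newest element directly at data[tail]; no tail-1 arithmetic.
func (q *Queue) Last() (int, bool) {
	if q.size == 0 {
		return 0, false
	}
	return q.data[q.tail], true
}

func main() {
	q := NewQueue(4)
	q.Enqueue(1)
	q.Enqueue(2)
	if v, ok := q.Last(); ok {
		fmt.Println(v) // 2
	}
}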
In the ReactiveX.io documentation it is stated that
You can ignore the final n items emitted by an Observable and attend only to those items that come before them, by modifying the Observable with the SkipLast operator.
together with a diagram at http://reactivex.io/documentation/operators/skiplast.html.
My expectation: SkipLast will read the entire Observable until it meets OnCompleted, and then produce a new Observable with the same timings as the original, minus the last items.
My doubt: how does the SkipLast operator know that "3" is the 2nd-from-last item of the Observable? Without seeing OnCompleted, how can it tell which item is the nth from the end?
Thanks @PanagiotisKanavos and @akarnokd for the valuable comments.
It is internally implemented with a queue of fixed size. Items from the sequence are enqueued; once the queue is full and starts to overflow, an item is dequeued, the latest value is enqueued, and the dequeued value is sent to OnNext(). When OnCompleted is reached, the cached items are not sent; OnCompleted is simply called. This way the last N cached items are skipped.
From the source code: if skipLast(N) is used, then N messages are kept in the this._ring array. As soon as message N+1 arrives the first message is emitted, when N+2 arrives the second message is emitted, and so on.
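A minimal sketch of that buffering scheme (in Go, with made-up names; the real RxJava/RxJS operators are more involved):

package main

import "fmt"

// skipLast runs each item through a fixed-size buffer of n items,
// so the final n items are never handed to emit (the OnNext stand-in).
func skipLast(items []int, n int, emit func(int)) {
	if n <= 0 {
		for _, v := range items {
			emit(v)
		}
		return
	}
	ring := make([]int, 0, n) // the cache of the most recent n items
	for _, v := range items {
		if len(ring) == n {
			// Buffer full: the oldest buffered item can no longer be
			// among the last n, so it is safe to emit it now.
			emit(ring[0])
			ring = append(ring[1:], v)
		} else {
			ring = append(ring, v)
		}
	}
	// OnCompleted: the n items still buffered are simply dropped.
}

func main() {
	skipLast([]int{1, 2, 3, 4, 5}, 2, func(v int) { fmt.Println(v) })
	// Prints 1, 2, 3; the last two items never reach emit.
}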
In Go channels, the element pushed last is consumed last. But is there a way to push an element to the "front" of the channel so that it gets a chance to be consumed out of turn? Assume elements 1, 2, 3, 4, 5, 6, 7, 8 are added to the channel and element 4 fails to process (1, 2, 3 are processed successfully). In this case I want to push element 4 back onto the channel in such a way that it may get a chance to be processed before elements 5, 6, 7, 8 and any subsequently added elements (if they have not already been pulled from the channel for processing). This could easily be achieved with blocking queues, but I don't want to use them.
But is there a way to push element to the "front" of the channel
No, there is not.
No, a channel is strictly FIFO. You'll have to play with multiple channels if you want priority, or use some other data structure, like a heap: https://golang.org/pkg/container/heap/.
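One common workaround (a sketch, not the only option) is a second "retry" channel that the consumer always drains first, which gives re-pushed items priority over new ones:

package main

import "fmt"

// process pretends to handle an item; here it fails once on item 4.
func process(v int, failedOnce map[int]bool) bool {
	if v == 4 && !failedOnce[4] {
		failedOnce[4] = true
		return false
	}
	return true
}

func main() {
	work := make(chan int, 8)
	retry := make(chan int, 8) // failed items jump the queue via this channel

	for i := 1; i <= 8; i++ {
		work <- i
	}

	failedOnce := make(map[int]bool)
	for processed := 0; processed < 8; {
		var v int
		// Nested select: always drain retry before touching work.
		select {
		case v = <-retry:
		default:
			select {
			case v = <-retry:
			case v = <-work:
			}
		}
		if process(v, failedOnce) {
			fmt.Println("done:", v)
			processed++
		} else {
			retry <- v // requeue; it will be picked up before 5, 6, 7, 8
		}
	}
}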
etcd allows clients to safely wait for changes to individual k/v nodes by supplying the last known index of a node to the wait command. etcd also allows waiting ("recursively") for any changes to child nodes under a certain parent node.
Now, the problem: is it possible to recursively wait on a parent node in such a way as to guarantee that no child node changes are ever missed by the client? The parent node's index is of no use in this case, as it does not change when a child node is modified.
If you're just starting up, presumably you have just retrieved the subtree you're watching. The reply has an etcd_index field; use that as the starting point.
Otherwise, your wait response contains the modification index of the change. Use that as the starting point for the next call.
You may have to increase one or both of these values by one to ensure that you don't get duplicate replies. I don't remember which of them I need to increment on purpose; the code needs tests which ensure that I get every change exactly once, so I adjust the values based on that.
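A minimal sketch of that loop against the etcd v2 HTTP API (in Go; error handling and the key name "parent" are illustrative, while wait=true, recursive=true, and waitIndex are the documented v2 watch parameters):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// watchResp maps the parts of an etcd v2 watch reply we care about.
type watchResp struct {
	Node struct {
		Key           string `json:"key"`
		ModifiedIndex uint64 `json:"modifiedIndex"`
	} `json:"node"`
}

func main() {
	base := "http://127.0.0.1:2379/v2/keys/parent"
	waitIndex := uint64(1) // seed this from the etcd_index of your initial read

	for {
		url := fmt.Sprintf("%s?wait=true&recursive=true&waitIndex=%d", base, waitIndex)
		resp, err := http.Get(url)
		if err != nil {
			panic(err) // sketch only; retry with backoff in real code
		}
		var w watchResp
		if err := json.NewDecoder(resp.Body).Decode(&w); err != nil {
			panic(err)
		}
		resp.Body.Close()

		fmt.Println("changed:", w.Node.Key)
		// Ask for strictly newer changes next time, so the same
		// event is not delivered twice.
		waitIndex = w.Node.ModifiedIndex + 1
	}
}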
I'm working on a project where I use Riak with Ripple, and I've stumbled on a problem.
For some reason I get duplicates when link-walking a structure of links, but when I link-walk using curl I don't get the duplicates, as far as I can see.
The difference between my curl-based link walk
curl -v http://127.0.0.1:8098/riak/users/2306403e5177b4716da9df93b67300824aa2fd0e/_,projects,0/_,tasks,1
and my Ruby ripple/riak-client-based link walk
result = Riak::MapReduce.new(self.robject.bucket.client).
  add(self.robject.bucket, self.key).
  link(Riak::WalkSpec.new({:key => 'projects'})).
  link(Riak::WalkSpec.new({:key => 'tasks', :bucket => 'tasks'})).
  map("function(v){ if(!JSON.parse(v.values[0].data).completed) { return [v]; } else { return []; } }", {:keep => true}).run
is, as far as I can tell, the map at the end.
However, the result of the map/reduce contains several duplicates, and I can't wrap my head around why. For now I've settled for removing the duplicates based on the key, but I wish the Riak result didn't contain duplicates in the first place, since it seems wasteful to strip them out at the end.
I've tried the following:
Making sure there are no duplicates in the link sets of my Ripple objects
Loading the data without the map/reduce, but the link walk still contains duplicate keys.
Any help is appreciated.
What you're running into here is an interesting side-effect/challenge of Map/Reduce queries.
M/R queries don't have any notion of read quorum values, and they necessarily have to hit every object (within the limitations of input filtering, of course) on every node.
Which means, when N > 1, the queries have to hit every copy of every object.
For example, let's say N=3, as per default. That means, for each written object, there are 3 copies, one each on 3 different nodes.
When you issue a read for an object (let's say with the default quorum value of R=2), the coordinating node (which received the read request from your client) contacts all 3 nodes (and potentially receives 3 different values, 3 different copies of the object).
It then checks to make sure that at least 2 of those copies have the same values (to satisfy the R=2 requirement), returns that agreed-upon value to the requesting client, and discards the other copies.
So, in regular operations (reads/writes, but also link walking), the coordinating node filters out the duplicates for you.
Map/Reduce queries don't have that luxury. They don't really have quorum values associated with them -- they are made to iterate over every (relevant) key and object on all the nodes. And because the M/R code runs on each individual node (close to the data) instead of just on the coordinating node, it can't really filter out duplicates intrinsically. One of the things M/R is designed for, for example, is updating (or deleting) all of the copies of objects on all the nodes.

So, each Map phase (in your case above) runs on every node, returns the matched 'completed' values for each copy, and ships the results back to the coordinating node to return to the client. And since your N is very likely greater than 1, there are going to be duplicates in the result set.
Now, you can probably filter out the duplicates explicitly, by writing code in the Reduce phase that checks whether a key is already present in the result set and rejects it if so.
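For instance, a reduce phase along these lines (a rough, untested sketch in the same JavaScript style as the map phase above; the exact shape of each value depends on what your map phase returns) could keep only the first occurrence of each bucket/key pair:

function(values) {
  var seen = {};
  return values.filter(function(v) {
    var k = v.bucket + "/" + v.key; // assumes the map phase returned whole objects
    if (seen[k]) { return false; }
    seen[k] = true;
    return true;
  });
}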
But honestly, if I were in your situation, I would just filter out the duplicates in Ruby on the client side rather than mess with the Reduce code.
Anyways, I hope that sheds some light on this mystery.
When I set timeToLiveSeconds="100", does it mean that the EhCache engine will expire the whole cache, or only the individual elements that have been alive for 100 seconds?
I've read EhCache's documentation, and it seems to point to the first interpretation; however, I'm not totally sure about that:
timeToLiveSeconds
This is an optional attribute. Legal values are integers between 0 and Integer.MAX_VALUE.
It is the number of seconds that an Element should live since it was created. Created means inserted into a cache using the Cache.put method.
0 has a special meaning, which is not to check the Element for time to live, i.e. it will live forever.
The default value is 0.
Thank you.
It will expire only the individual element, not the whole cache. Check out the source code: the getExpirationTime() method belongs to the Element class.
http://grepcode.com/file/repo1.maven.org/maven2/net.sf.ehcache/ehcache-core/2.5.0/net/sf/ehcache/Element.java#Element.getExpirationTime%28%29
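In other words, expiry is computed per element from that element's own creation time. A small illustration of those semantics (a sketch in Go, not EhCache's actual code):

package main

import (
	"fmt"
	"time"
)

type entry struct {
	value     string
	createdAt time.Time
}

// cache mimics timeToLiveSeconds: each element's lifetime is measured
// from ITS OWN creation time, so elements expire one by one and the
// cache as a whole is never reset.
type cache struct {
	ttl   time.Duration
	items map[string]entry
}

func (c *cache) Put(k, v string) {
	c.items[k] = entry{value: v, createdAt: time.Now()}
}

func (c *cache) Get(k string) (string, bool) {
	e, ok := c.items[k]
	if !ok {
		return "", false
	}
	// A ttl of 0 means "live forever", matching the quoted documentation.
	if c.ttl > 0 && time.Since(e.createdAt) > c.ttl {
		delete(c.items, k) // only this element expires
		return "", false
	}
	return e.value, true
}

func main() {
	c := &cache{ttl: 100 * time.Second, items: map[string]entry{}}
	c.Put("a", "1")
	if v, ok := c.Get("a"); ok {
		fmt.Println(v) // still "1" until 100 seconds after "a" was put
	}
}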