Does Consul KV support reading multiple keys, similar to Redis's MGET?
You can use KV.Keys and pass in a partial key location (a prefix), then enumerate the response to get or delete the individual KV pairs.
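If you are not using the Go client, the KV HTTP API can do the same thing with its ?keys (or ?recurse) query parameter. A rough Java sketch, assuming a hypothetical prefix config/app/ and a local agent on port 8500:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulListKeys {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // List every key stored under the (hypothetical) prefix "config/app/".
        HttpRequest listKeys = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/kv/config/app/?keys"))
                .GET()
                .build();

        HttpResponse<String> resp = http.send(listKeys, HttpResponse.BodyHandlers.ofString());
        // The body is a JSON array of key names; loop over it to GET or DELETE each pair.
        System.out.println(resp.body());
    }
}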
Given a Consul KV key a/key, where there are multiple Consul server instances running, what happens in the following case:
Two requests, A (set the value to val-a) and B (set the value to val-b), are made to the create/update key endpoint for the same key a/key, without using the cas or acquire parameters:
If A and B are made in parallel, can the key's value become corrupted?
Or, if A arrives slightly before B, can the final value still end up being val-a?
The data will not be corrupted if Consul receives two write requests at the same time. The write requests would be processed serially by the leader, so the value of a/key would become either val-a or val-b, whichever is processed last.
You can find details on how Consul writes data in Consul's Consensus Protocol documentation.
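If you do need to protect against the second writer silently overwriting the first, the cas parameter is the tool for that. A rough sketch against the HTTP API, where the ModifyIndex value 42 is a hypothetical value read beforehand from a GET on the key:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulCasWrite {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Hypothetical ModifyIndex, read earlier from GET /v1/kv/a/key.
        long modifyIndex = 42;

        // Check-and-set: the write is applied only if the key's ModifyIndex still matches.
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/kv/a/key?cas=" + modifyIndex))
                .PUT(HttpRequest.BodyPublishers.ofString("val-a"))
                .build();

        HttpResponse<String> resp = http.send(put, HttpResponse.BodyHandlers.ofString());
        // Consul returns "true" if the CAS write won, "false" if another writer got there first.
        System.out.println("CAS succeeded: " + resp.body());
    }
}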
In the same way KStream and KTable#toStream() allow calling process or transform and thus enable inspecting the record headers, is there a way to achieve the same with GlobalKTable? Basically, I am looking for a way to inspect the record headers of a Kafka topic when consuming it as a GlobalKTable. Thank you!
Maybe you could use #addGlobalStore instead?
Note, though, that the "global processor" should never modify the data but should put() the key-value pair (and maybe the timestamp) unmodified into the (Timestamped)KeyValue store (cf. https://issues.apache.org/jira/browse/KAFKA-8037).
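A minimal sketch of what that could look like with the older Processor API (store name, topic name, and serdes are placeholders, and exact signatures vary across Kafka Streams versions):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();

builder.addGlobalStore(
    Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("global-store"),
            Serdes.String(),
            Serdes.String())
        .withLoggingDisabled(),                       // global stores must not have a changelog
    "input-topic",
    Consumed.with(Serdes.String(), Serdes.String()),
    () -> new Processor<String, String>() {
        private ProcessorContext context;
        private KeyValueStore<String, String> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(final ProcessorContext context) {
            this.context = context;
            this.store = (KeyValueStore<String, String>) context.getStateStore("global-store");
        }

        @Override
        public void process(final String key, final String value) {
            // The headers of the record currently being consumed are available via the context.
            context.headers().forEach(header -> System.out.println(header.key()));
            // Per KAFKA-8037: put the pair into the store unmodified so it mirrors the topic.
            store.put(key, value);
        }

        @Override
        public void close() { }
    });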
I have written a Flink job which uses Guava cache. The cache object is created and used in a run() function called in the main() function.
It is something like:
main() {
    run(some, params)
}

run() {
    // create and use the Guava cache object here
}
If I run this Flink job with some level of parallelism, will all of the parallel tasks use the same cache object? If not, how can I make them all use a single cache?
The cache is used inside a process() function for a stream. So it's like
incoming_stream.process(new ProcessFunction() { //Use Guava Cache here })
You can think of my use case as cache-based deduplication, so I want all of the parallel tasks to refer to a single cache object.
Using a Guava cache with Flink is usually an anti-pattern. Not that it can't be made to work, but there's probably a simpler and more scalable solution.
The standard approach to deduplicating in a thoroughly scalable, performant way with Flink is to partition the stream by some key (using keyBy), and then use keyed state to remember the keys that have been seen. Flink's keyed state is managed by Flink in a way that makes it fault tolerant and rescalable, while keeping it local. Flink's keyed state is a sharded key/value store, with each instance handling all of the events for some portion of the key space. You are guaranteed that all events for the same key will be processed by the same instance -- which is why this works well for deduplication.
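A minimal sketch of that pattern, assuming a hypothetical Event type keyed by Event::getId (all names are placeholders):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Event and Event::getId are hypothetical; substitute your own record type and key selector.
DataStream<Event> deduped = incoming_stream
    .keyBy(Event::getId)
    .process(new KeyedProcessFunction<String, Event, Event>() {
        private transient ValueState<Boolean> seen;

        @Override
        public void open(Configuration parameters) {
            seen = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("seen", Boolean.class));
        }

        @Override
        public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
            // The state is scoped to the current key, so this check is local to the
            // parallel instance that owns this portion of the key space.
            if (seen.value() == null) {
                seen.update(true);
                out.collect(event);
            }
        }
    });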
If you need instead that all of the parallel instances have a complete copy of some (possibly evolving) data set, that's what broadcast state is for.
Flink tasks run on multiple JVMs or machines, so the issue is how to share objects between JVMs.
Normally, you can acquire objects from a remote JVM with an RPC (via TCP) or REST (via HTTP) call.
Alternatively, you may serialize the objects and store them in a database like Redis, then read them back and deserialize them.
In Flink, there is a more graceful way to achieve this: you can store objects in state, and broadcast state may fit your use case.
Broadcast state was introduced to support use cases where some data coming from one stream needs to be broadcast to all downstream tasks.
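A rough sketch, where ruleStream and eventStream are hypothetical inputs:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

// ruleStream and eventStream are hypothetical; the broadcast side carries the small,
// slowly changing data set that every parallel instance needs a full copy of.
MapStateDescriptor<String, String> ruleDescriptor = new MapStateDescriptor<>(
        "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

BroadcastStream<String> broadcastRules = ruleStream.broadcast(ruleDescriptor);

DataStream<String> result = eventStream
    .connect(broadcastRules)
    .process(new BroadcastProcessFunction<String, String, String>() {
        @Override
        public void processElement(String event, ReadOnlyContext ctx, Collector<String> out) throws Exception {
            // Regular elements get read-only access to the broadcast state.
            String rule = ctx.getBroadcastState(ruleDescriptor).get("active-rule");
            out.collect(event + " / " + rule);
        }

        @Override
        public void processBroadcastElement(String rule, Context ctx, Collector<String> out) throws Exception {
            // Every parallel instance receives the broadcast element and updates its own copy.
            ctx.getBroadcastState(ruleDescriptor).put("active-rule", rule);
        }
    });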
Hope this helps.
I am looking for a way to access the remaining TTL of a Redis key-value pair via Laravel. I don't mind using either the Cache or Redis facades (or anything else for that matter).
In the API I can only see how to return the default TTL - getDefaultCacheTime().
I want to find the remaining TTL.
For reference, the Redis CLI command is TTL mykey.
Since there's a command method on the Illuminate\Redis\Database class, you can simply run:
Redis::command('TTL', ['yourKey']);
This is documented here.
It turns out (with recent versions of Laravel, anyway) that you can use Redis commands directly and they will be converted via magic methods, so you can simply use:
Redis::ttl('yourKey');
The setup: suppose I have 2 memcached servers and 2 web servers that connect to these two memcached servers. I am using the spymemcached client.
Suppose:
1) web1 inserts a key "abc" into memcached. Based on some mechanism, it is stored in memcached1.
2) When web2 tries to get the key "abc", how does it know to go to memcached1 for it?
Do I need any special settings on the spymemcached client side to make sure that the server chosen to store a given key is always determined the same way?
You do not need any special logic to do this. Memcached works as a distributed cache: the client hashes each key to one of the servers. As long as both clients are configured with the full list of servers in your memcached cluster, you should have no problem.
I also want to note that the one parameter you can change is the hashing algorithm that is used by the clients. This can be done in whatever ConnectionFactory class you use to build your connection.
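For example, a minimal sketch of building the client so that every web server uses the same server list and hashing configuration (host names are hypothetical, and the ketama/consistent settings are optional choices, not defaults):

import java.net.InetSocketAddress;
import java.util.List;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.ConnectionFactoryBuilder;
import net.spy.memcached.DefaultHashAlgorithm;
import net.spy.memcached.MemcachedClient;

public class MemcachedSetup {
    public static void main(String[] args) throws Exception {
        // Host names are hypothetical; what matters is that both web servers are
        // configured with the same server list and the same hashing settings.
        List<InetSocketAddress> servers =
                AddrUtil.getAddresses("memcached1:11211 memcached2:11211");

        MemcachedClient client = new MemcachedClient(
                new ConnectionFactoryBuilder()
                        .setHashAlg(DefaultHashAlgorithm.KETAMA_HASH)
                        .setLocatorType(ConnectionFactoryBuilder.Locator.CONSISTENT)
                        .build(),
                servers);

        client.set("abc", 3600, "some value"); // "abc" hashes to exactly one of the servers
        Object value = client.get("abc");      // the same hash routes the lookup to that server
        System.out.println(value);

        client.shutdown();
    }
}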
Typically, the same key would never be stored on two different memcached servers. This is because the memcached client uses some algorithm to decide which server a key should be stored on, and it identifies the same server again when looking up that key.
A typical algorithm could be:
server_id = key.hashCode() % N, where N is the number of memcached servers, identified by numbers from 0 to N-1
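As a simplified illustration only (real clients such as spymemcached typically prefer consistent hashing over a plain modulo, so that adding or removing a server does not remap most keys):

// Simplified illustration of modulo-based server selection (hypothetical 2-server setup).
String[] servers = {"memcached1:11211", "memcached2:11211"};
String key = "abc";

// Mask out the sign bit so the index is never negative, then take the modulo.
int serverId = (key.hashCode() & 0x7fffffff) % servers.length;
System.out.println("key \"" + key + "\" -> " + servers[serverId]);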