We can store multiple keys under one cache. Is it possible to set expiration at the key level rather than the cache level in ABP?
MyCache
Key1
Key2
Key3
I need to set different expiration times for MyCache and for Key1, Key2 and Key3 individually.
Thanks in advance.
You can set a sliding expire time per entry when you set the cache value. See the slidingExpireTime parameter:
/// <param name="key">Key</param>
/// <param name="value">Value</param>
/// <param name="slidingExpireTime">Sliding expire time</param>
/// <param name="absoluteExpireTime">Absolute expire time</param>
void Set(string key, object value, TimeSpan? slidingExpireTime = null, TimeSpan? absoluteExpireTime = null);
https://github.com/aspnetboilerplate/aspnetboilerplate/blob/dev/src/Abp/Runtime/Caching/ICache.cs
Related
I need to insert and retrieve the key-value pairs below in Redis. As I am new to Redis, which should I choose for this data: RedisTimeSeries or sorted sets?
For BoxNo-1, the key-value pairs below:
key1 - value1
key2 - value2
key3 - value3
... (200 key-value pairs)
For BoxNo-2, the key-value pairs below:
key4 - value4
key5 - value5
... (150 key-value pairs)
The above data needs to be inserted into Redis as key-value pairs, and I want to retrieve the key-value pairs from Redis every 15 minutes.
As per my understanding of RedisTimeSeries, the data structure for my data would look like:
Label: boxno-1
key: key1, value: value1
However, with RedisTimeSeries I would need to create a separate time series per key every time.
Is my understanding correct?
I am using Caffeine cache and looking for a way to update a value in the cache without changing its expiry time.
The scenario is that I am using the cache to speed up data loading. A 5-second delay in data freshness is acceptable, while I expect reads to be fast. Besides, I want these cache entries to expire one day after they are first created, to avoid unnecessary memory use.
Thus, I want every cached key to last for one day, but its value to be updated every 5 seconds.
The refreshAfterWrite method seems close, but the first value returned after the refresh duration is still the old one. This is not ideal for me because the gap between two hits can be hours; in that case I still want a relatively fresh result (no more than 5 seconds old).
So I am trying to manually update each key.
First I built a cache with a 24-hour expiry like this:
cache = Caffeine.newBuilder()
.expireAfterWrite(24, TimeUnit.HOURS)
.build();
Then I wrote a scheduled task that runs every 5 seconds, iterates the keys in the cache, and does the following:
cache.asMap().computeIfPresent(key, mapperFunction);
Then I checked the age of the key:
cache.policy().expireAfterWrite().get().ageOf(key)
However, the age is not growing as expected. I think the computeIfPresent call is considered a "write", so the expiry time is reset as well.
Is there a way to update a value without changing its expiry time in Caffeine?
Or is there any other approach for my scenario?
A write is the creation or update of a mapping, so expireAfterWrite is not a good fit for you. Instead you can set a custom expiration policy that sets the initial duration and does nothing on a read or update. This is done using expireAfter(Expiry), such as
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.expireAfter(new Expiry<Key, Graph>() {
public long expireAfterCreate(Key key, Graph graph, long currentTime) {
return TimeUnit.HOURS.toNanos(24);
}
public long expireAfterUpdate(Key key, Graph graph,
long currentTime, long currentDuration) {
return currentDuration;
}
public long expireAfterRead(Key key, Graph graph,
long currentTime, long currentDuration) {
return currentDuration;
}
})
.build(key -> createExpensiveGraph(key));
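To illustrate the semantics that the Expiry above encodes (the deadline is fixed at creation and untouched by reads and updates), here is a minimal stdlib-only sketch. The class and method names are hypothetical stand-ins, not Caffeine API; time is passed in explicitly to keep the example deterministic.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in mirroring the Expiry semantics above:
// the deadline is set once when an entry is created and is NOT
// moved by later value updates.
class FixedDeadlineCache<K, V> {
    private static final class Entry<V> {
        V value;
        final long deadlineNanos;   // fixed at creation
        Entry(V value, long deadlineNanos) {
            this.value = value;
            this.deadlineNanos = deadlineNanos;
        }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long ttlNanos;

    FixedDeadlineCache(long ttlNanos) { this.ttlNanos = ttlNanos; }

    void put(K key, V value, long nowNanos) {
        Entry<V> e = map.get(key);
        if (e == null) {
            map.put(key, new Entry<>(value, nowNanos + ttlNanos)); // create: set deadline
        } else {
            e.value = value;                                       // update: keep deadline
        }
    }

    V get(K key, long nowNanos) {
        Entry<V> e = map.get(key);
        if (e == null || nowNanos >= e.deadlineNanos) return null; // expired
        return e.value;
    }
}
```

With this policy, the 5-second refresh task can replace values freely while each key still dies exactly one TTL after it was first written.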
Setup
Kafka 2.5
Apache Kafka Streams 2.4
Deployment to OpenShift (containerized)
Objective
Group a set of messages from a topic using a set of value attributes & assign a unique group identifier
-- This can be achieved by using selectKey, mapValues and groupByKey:
originalStreamFromTopic
    .selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
    .mapValues((k, v) -> {
        v.setGroupKey(k);
        return v;
    })
    .groupByKey();
For each message within a specific group, create a new message with an itemCount number as one of the attributes.
e.g. A group with key "keypart1|keyPart2" can have 10 messages and each of the message should have an incremental id from 1 through 10.
Candidate approaches: aggregate, or count with some additional StateStore-based implementation.
One of the options listed above can make use of a couple of state stores:
state store 1 -> mapping of each groupId to an individual item (KTable)
state store 2 -> count per groupId (KTable)
A join of these two tables would stamp a sequence on the messages as they get published to the final topic.
Other statistics:
The average number of messages per group would be in the low thousands, except for an outlier case where it can go up to 500k.
In general the candidates for a group should be made available on the source within a span of 15 mins max.
The following points are of concern from an optimal-solution perspective:
I am still not clear how I would be able to stamp a sequence number on the messages unless some kind of state store is used to keep track of messages published within a group.
Use of KTables and state stores (either explicitly, or implicitly through KTables) would add considerably to the state store size.
Given that the problem involves some kind of stateful processing, the state store can't be avoided, but any possible optimizations would be useful.
Any thoughts or references to similar patterns would be helpful.
You can use one state store with which you maintain the ID for each composite key. When you get a message you select a new composite key and then you lookup the next ID for the composite key in the state store. You stamp the message with the new ID that you just looked up. Finally, you increase the ID and write it back to the state store.
Code-wise, it would be something like:
// create state store
StoreBuilder<KeyValueStore<String, Long>> keyValueStoreBuilder = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("idMaintainer"),
        Serdes.String(),
        Serdes.Long()
);
// add store
builder.addStateStore(keyValueStoreBuilder);
originalStreamFromTopic
    .selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
    // repartition() requires Kafka Streams 2.6+; on 2.4/2.5 use through() instead
    .repartition()
    .transformValues(() -> new ValueTransformerWithKey<String, V, V>() {
        private KeyValueStore<String, Long> state;

        @Override
        public void init(ProcessorContext context) {
            state = (KeyValueStore<String, Long>) context.getStateStore("idMaintainer");
        }

        @Override
        public V transform(String readOnlyKey, V value) {
            // look up the next ID for the composite key (start at 1)
            Long id = state.get(readOnlyKey);
            long nextId = (id == null) ? 1L : id;
            // stamp the record (setItemCount is a placeholder for your value type)
            value.setItemCount(nextId);
            // increase the ID and write it back to the state store
            state.put(readOnlyKey, nextId + 1);
            return value;
        }

        @Override
        public void close() {
        }
    }, "idMaintainer")
    .to("output-topic");
You do not need to worry about concurrent access to the state store, because in Kafka Streams the same key is always processed by a single task, and tasks do not share state stores. That means all records with the same composite key will be processed by one single task that exclusively maintains the IDs for those composite keys in its state store.
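The core bookkeeping the transformer performs can be sketched with a plain HashMap standing in for the state store (the class and method names below are illustrative, not Kafka Streams API):

```java
import java.util.HashMap;
import java.util.Map;

// HashMap stand-in for the "idMaintainer" state store: composite key -> next ID.
class SequenceStamper {
    private final Map<String, Long> store = new HashMap<>();

    // Returns the sequence number to stamp on the next message of this group
    // and advances the counter, as the transformer does per record.
    long nextId(String compositeKey) {
        long id = store.getOrDefault(compositeKey, 1L);
        store.put(compositeKey, id + 1);
        return id;
    }
}
```

So a group of 10 messages sharing the key "keypart1|keyPart2" gets stamped 1 through 10 in arrival order, and a different composite key starts again at 1.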
In some countries weekend days are Friday/Saturday.
How can a Windows application find out weekend days of the user?
Well... I don't know of a one-function answer to this. You're going to need to know where they are somehow. If it's a web app, you can trace their IP and figure out what country they are from. If it's a Windows app, you're probably going to need to ask them (the clock only provides time zone information, and I can't figure out where else to grab a more fine-grained location from Windows).
You can figure out what day it is with GetDayOfWeek in MFC (http://msdn.microsoft.com/en-us/library/1wzak8d0%28VS.80%29.aspx), or DayOfWeek if you hop to .NET (http://msdn.microsoft.com/en-us/library/system.dayofweek.aspx).
You'll need a lookup table of countries and which days they consider weekends. You'll probably have to construct this yourself, but you can get a list of countries from: http://www.iso.org/iso/english_country_names_and_code_elements
That list is ISO 3166.
It's kept up to date and should be your one-stop shop for the listing. From there, you'll match weekends to the countries. http://en.wikipedia.org/wiki/Workweek might help in figuring out weekends/workweeks for countries.
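Such a lookup table could be sketched as follows (Java here as a neutral illustration; the entries are examples to verify against an up-to-date source such as the Wikipedia list above, not an authoritative dataset):

```java
import java.time.DayOfWeek;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hand-built ISO 3166 country code -> weekend days lookup.
// Entries are illustrative; weekend conventions change over time,
// so verify them before relying on this table.
class WeekendLookup {
    private static final Set<DayOfWeek> SAT_SUN =
            EnumSet.of(DayOfWeek.SATURDAY, DayOfWeek.SUNDAY);
    private static final Set<DayOfWeek> FRI_SAT =
            EnumSet.of(DayOfWeek.FRIDAY, DayOfWeek.SATURDAY);

    private static final Map<String, Set<DayOfWeek>> WEEKENDS = new HashMap<>();
    static {
        WEEKENDS.put("US", SAT_SUN);
        WEEKENDS.put("DE", SAT_SUN);
        WEEKENDS.put("EG", FRI_SAT);   // Egypt: Friday/Saturday
        WEEKENDS.put("IL", FRI_SAT);   // Israel: Friday/Saturday
    }

    // Falls back to Saturday/Sunday for countries not in the table.
    static Set<DayOfWeek> weekendFor(String isoCountryCode) {
        return WEEKENDS.getOrDefault(isoCountryCode, SAT_SUN);
    }
}
```

The app would pick the country code from a user setting (asked once, as suggested above) and check the current day against the returned set.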
The following code will provide whether or not it is considered the weekend, with an option for different cultures (where the weekend starts/ends on a different day):
/// <summary>
/// Returns true if the specified date falls on a weekend
/// in the given culture.
/// </summary>
public static bool IsItWeekend(DateTime currentDay, CultureInfo cultureInfo)
{
    DayOfWeek firstDay = cultureInfo.DateTimeFormat.FirstDayOfWeek;
    // The weekend is assumed to be the 6th and 7th days of the culture's week;
    // the modulo keeps the arithmetic inside the valid DayOfWeek range (0-6).
    DayOfWeek weekendDay1 = (DayOfWeek)(((int)firstDay + 5) % 7);
    DayOfWeek weekendDay2 = (DayOfWeek)(((int)firstDay + 6) % 7);
    DayOfWeek current = currentDay.DayOfWeek;
    return current == weekendDay1 || current == weekendDay2;
}
The ICU project might help. It is designed for software internationalization and globalization; C/C++ and Java versions are available.
icu-project.org
I have a window that looks like so:
Every time a record is added, I want the repair ID to be set to a unique number that hasn't been used in the table yet, e.g. if there are ID numbers 1, 2, 3, then when I press +, the ID field should be set to 4.
Also, if one of the records in the table is deleted, so that the ID numbers are 1, 2, 4, then when I press +, the number in the Record ID field should be set to 3.
At the moment, I have a custom ManagedObject class, where I declare:
-(void)awakeFromInsert {
[self setValue:[NSDate date] forKey:@"date"];
}
In order to set the date to today's date.
How would I go about implementing this unique record ID?
Thanks!
For a pure auto-incrementing ID (like the one asked for in this question), something like what's described in this message may do the job. Unfortunately, that won't provide values that fill in the blanks for deleted items in your list.
For a small number of records, simply loop through them until you find a free ID. Pseudocode here since I don't know your language:
int RepairID = 1;
while isUsed(RepairID) {
    RepairID = RepairID + 1;
}
return RepairID;
For a large number of records you can keep track of a list of deleted IDs and the highest ID. When adding a record, pick the smallest deleted ID, or if no deleted IDs are left to reuse, use the highest ID + 1.
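The small-record-count strategy above can be written concretely; a stdlib Java sketch (as a neutral illustration, since the question itself is Objective-C/Core Data):

```java
import java.util.Set;

// Smallest positive ID not present in the set of IDs already in use.
// A linear scan is fine for a small number of records.
class RepairIds {
    static int nextFreeId(Set<Integer> usedIds) {
        int id = 1;
        while (usedIds.contains(id)) {
            id++;
        }
        return id;
    }
}
```

This reproduces the behavior the question asks for: IDs {1, 2, 3} yield 4, while {1, 2, 4} yield the reclaimed gap 3.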
I've used the numerical form of the date before (with a quick check to make sure it's actually unique, i.e. the clock hasn't been adjusted). +[NSDate timeIntervalSinceReferenceDate] returns an NSTimeInterval (which is a typedef for double). This, I believe, is independent of time zones, daylight saving, etc.
The only weakness, as I alluded to earlier, is the clock being adjusted, but you could always make sure the value is unique. If you have more requirements than what you listed, let me know. I have a few myself, and what I believe to be sufficient workarounds.
If you're using an array controller in your interface, you can use the count of its arrangedObjects array to create your ID. You can implement this by overriding the -(id)newObject method:
//implemented in yourArrayController.m
-(id)newObject
{
    //call the super method to return an object of the entity for the array controller
    NSManagedObject* yourNewObject = [super newObject];
    //set the ID to the count of the arrangedObjects array
    NSNumber* theID = [NSNumber numberWithInteger:[[self arrangedObjects] count]];
    [yourNewObject setValue:theID forKey:@"ID"];
    //return your new object
    return yourNewObject;
}