I want to get a record from Aerospike, so I am using the Client.Get method.
However, whenever I do a Get I also want to refresh the record's TTL. Normally you would use a WritePolicy, which lets you set a TTL, but the Get method accepts only a BasePolicy.
Is the following way correct or is there a better way of doing this?
client.Get(nil, key, bin)
client.Touch(myWritePolicy, key)
Do it within an operate() command: you can touch() as well as get() under the same record lock, in one network trip. Note that if your record is stored on disk, updating the TTL, however you do it, entails rewriting the record to a different location on disk, because the TTL is stored in the record metadata.
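As a rough sketch with the aerospike-client-go library (the namespace, set, bin name, and TTL here are illustrative), the two operations combine like this:

package main

import (
	"fmt"

	as "github.com/aerospike/aerospike-client-go"
)

func main() {
	client, err := as.NewClient("127.0.0.1", 3000)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	key, err := as.NewKey("test", "messages", "msg-1")
	if err != nil {
		panic(err)
	}

	// The WritePolicy carries the new TTL (in seconds) that TouchOp applies.
	policy := as.NewWritePolicy(0, 3600)

	// One Operate call: TouchOp resets the TTL and GetBinOp reads the bin,
	// under the same record lock and in a single network round trip.
	record, err := client.Operate(policy, key, as.TouchOp(), as.GetBinOp("bin"))
	if err != nil {
		panic(err)
	}
	fmt.Println(record.Bins)
}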
In Redis, is there a way to mark a key as expired without actually dropping it from the database? A use case for this would be to let a client know that it needs to pull fresh data from the producer of that data, but if the producer is unreachable, still have the stale data available in the cache. That would be simpler than maintaining a separate database or Redis cache just to keep stale data alongside the data that is meant to expire and trigger a cache update.
There's no built-in way to do that.
To achieve this, you have to do the expiry work yourself. You can save the data into a hash along with its last modification time:
hset key data value time last-modification-time
When you retrieve the data, compare the last modification time with the current time to check whether the data is stale.
When you update the value, also set the last modification time to the current time.
To make this easy to use, and atomic, wrap the logic in a Lua script.
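A minimal read-side sketch of such a script (the field names data and time follow the hash layout above; the script name and the freshness flag it returns are illustrative):

-- get_if_fresh.lua
-- KEYS[1] = hash key, ARGV[1] = current unix time, ARGV[2] = max age in seconds
local entry = redis.call('HMGET', KEYS[1], 'data', 'time')
if not entry[1] then
  return nil -- key does not exist
end
local age = tonumber(ARGV[1]) - tonumber(entry[2])
if age > tonumber(ARGV[2]) then
  return {entry[1], 'stale'} -- data still served, but the caller should refresh it
end
return {entry[1], 'fresh'}

Invoke it with, e.g., EVAL "$(cat get_if_fresh.lua)" 1 mykey 1700000000 60; the write side would similarly HSET both fields in one script.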
Implement cache in TIBCO BW
I need to implement cache/memory in TIBCO BW.
The contents of this cache should be available across BW projects.
What I am trying to do: when I receive a message containing multiple shipment records (shipment and delivery number form a unique combination),
I need to first check in the cache whether any of these records exist.
If yes - reject the whole XML.
If not, push the data into the cache/memory.
Once this is done, I need to make a SOAP request-reply call to an external system.
In another project, when an acknowledgement is received from the external system, I need to check the records in the message, find those records in the cache, and delete them.
Is there any way to do this?
The challenge here is that there is no unique key for the whole message; each record is unique by its shipment/delivery combination.
Here is what I tried and the challenges with each approach:
1) I thought of putting the data in a file and naming the file after the requestID/key of each message,
then, in the other project, checking the file and deleting it.
But since we don't have a key for the whole message, I cannot do that.
2) Using shared variables:
I believe shared variables are not available across BW projects, so this option is out.
3) The third option is to use an EMS queue and temporarily park the message containing the records there.
Then search in it, and if the records match, reject the request.
And in the acknowledgement (in the other project), search for the records in the EMS message and delete that particular message.
Any help on this would be appreciated.
thanks
Why don't you use a database or a file to store the records?
If the AppNode is stopped or restarted, or a problem occurs in it, an in-memory cache is erased and you will not be able to retrieve the records that have not yet been processed.
A fourth option is to use the TIBCO 'Java Global Instance' shared configuration resource to implement the cache in Java.
"The Java Global Instance shared configuration resource allows you to
specify a Java object that can be shared across all process instances
in a Java Virtual Machine (JVM). When the process engine is started,
an instance of the specified Java class is constructed."
You can google tons of cache implementations in Java, for example https://crunchify.com/how-to-create-a-simple-in-memory-cache-in-java-lightweight-cache/
Regarding your statement:
"The challenge here is that there is no unique key for the whole message; each record is unique by its shipment/delivery combination."
So you do have a unique key: the shipment/delivery combination is unique.
If the record size is not too big, you can use the record itself as the key "as is", or create a unique hash for each record and use that as the key if key size is an issue.
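As a rough sketch of the class you would expose through a Java Global Instance resource (the class and method names here are illustrative), keyed by the shipment/delivery combination:

import java.util.concurrent.ConcurrentHashMap;

// One instance per process engine JVM, shared across all process instances
// via the Java Global Instance shared configuration resource.
public class ShipmentCache {
    private final ConcurrentHashMap<String, Long> entries = new ConcurrentHashMap<>();

    // The unique key is the shipment/delivery combination.
    private static String keyOf(String shipmentNo, String deliveryNo) {
        return shipmentNo + "|" + deliveryNo;
    }

    // Returns false if the record is already cached, i.e. the XML should be rejected.
    public boolean addIfAbsent(String shipmentNo, String deliveryNo) {
        return entries.putIfAbsent(keyOf(shipmentNo, deliveryNo),
                                   System.currentTimeMillis()) == null;
    }

    // Called when the acknowledgement for a record arrives in the other project.
    public void remove(String shipmentNo, String deliveryNo) {
        entries.remove(keyOf(shipmentNo, deliveryNo));
    }
}

Keep in mind the caveat above: this cache lives in the engine's memory, so it is lost on restart, and per the quoted documentation it is only shared between projects that run in the same JVM.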
In Meteor, I am a little confused about the difference between Session and a local collection.
I know that Session is a temporary reactive key-value store, client-side only, that is cleared on page refresh.
A local collection seems to be the same: reactive, temporary, client-side storage, cleared on page refresh, but with more flexible functions such as insert, update, and remove queries, like a server-side Mongo collection.
So I guess I could manage everything in a local collection without Session, or everything in Session without a local collection.
But what is the best and most efficient way to use Session and/or a local collection?
Simply put, when should I use Session, and when should I not?
And when should I use a local collection, and when not?
When I read your question I told myself this was a very easy one, but then I found myself scratching my head. I tried to come up with an example that you could accomplish only with Session or only with collections, but I didn't find a use case. So let's roll things up from the beginning. Basically, you already answered the question yourself: it is the little bit of sugar that makes collections special.
When to use a collection?
Basically, a collection is a database artifact. Imagine you have a client-server application where all the data is persisted in server-side storage. You can use a client-side collection to give the user a small subset of the server's collection, so a client collection is a database with a reduced amount of data. The advantage is that you can access the collection with queries, and you can use the same queries on the server and the client. In addition, a collection always contains multiple objects of the same type. Sometimes you produce data on the client, for the client, with no server interaction needed; then you can use a local collection. A local collection provides the same functionality as a normal collection, without the server communication. Use one whenever you have multiple objects with the same structure, especially if you'd like to use query operators on them.
You could also save such data inside a Session object, since Session values can hold multiple objects too. But imagine you want to find one object in an object array by a particular id: you would have to iterate through the whole array and write additional logic that a collection handles like magic. Further, collections return cursors. A cursor is a reactive object that only changes when the selected data changes. That means that if you find by an id, the dependent view rerenders only when the object with that id changes. With Session you can't do that: when a Session value changes, you have to rerender all depending objects.
When to use a session?
For everything else. Session entries are often just small objects holding some configuration logic; a Session value is basically a single object, not multiple occurrences of equal objects. In short, if it doesn't fit the collection use cases, use Session.
Have a look at this post that describes why sessions should not be overused.
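To make the contrast concrete, a small client-side sketch (the collection, document, and key names are illustrative):

// A local collection: passing null as the name keeps it client-side only.
var Drafts = new Mongo.Collection(null);

// Collection style: structured documents plus Mongo-like queries.
Drafts.insert({ _id: 'draft-1', body: 'hello' });
var draft = Drafts.findOne('draft-1'); // in an autorun, reruns only when this doc changes

// Session style: one flat reactive key-value pair.
Session.set('selectedDraftId', 'draft-1');
var selectedId = Session.get('selectedDraftId'); // reruns whenever this key changes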
I assume that by local collection you mean: new Mongo.Collection(null)
The difference is that local collections do not survive hot code pushes. A refresh will erase Session, but a hot code push will not: there is special code in Meteor to persist the values of the Session variable across a hot code push.
You would use Session whenever you're storing temporary values that do NOT need to be persisted to the database.
Trivial examples include a user's selection of filters, or the currently selected item in an index view.
Data manipulated in Minimongo (insert, update, delete, etc.) is intended to be sent back to the server and stored in the database; updating a user's profile information, for example.
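For example (a sketch; the Session key and profile field are illustrative):

// Temporary UI state: reactive, but never persisted anywhere.
Session.set('activeFilter', 'unread');

// A Minimongo write on a named collection: sent to the server and persisted.
Meteor.users.update(Meteor.userId(), { $set: { 'profile.name': 'Ada' } });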
I have a Message entity that has a messageID property. I'd like to ensure that there's only ever one instance of a Message entity with a given messageID. In SQL, I'd just add a unique constraint to the messageID column, but I don't know how to do this with Core Data. I don't believe it can be done in the data model itself, so how do you go about it?
My initial thought is to use a validation method to do a fetch on the NSManagedObject's context for the ID, see if it finds anything but itself, and if so, fails the validation. I suspect this will work - but I'm worried about the performance of something like that. I went through a lot of effort to minimize the fetch requests needed for the entire import routine, and having it validate by performing a fetch for every single new message entity seems a bit excessive. I can get all pre-existing objects I need and identify all the new objects I need to insert into the store using just two fetch queries before I do the actual work of importing and connecting everything together. This would add a fetch to every single update or insert in addition to those two - which would seem to eliminate any performance advantage I had by pre-processing the import data in the first place!
The main reason this is an issue is that the importer can (potentially) run several batches concurrently on several threads and may include some overlapping/duplicate data that needs to ultimately result in just one object in the store and not duplicate entries. Is there a reasonable way to do this and does what I'm asking for make sense for Core Data?
The only way to guarantee uniqueness is to do a fetch. Fortunately you can just do a -countForFetchRequest:error: and check to see if it is zero or not. That is the least expensive way to guarantee uniqueness at this time.
You can probably do this in the validation, or run it in the loop that is processing the data. Personally, I would do it before creating the NSManagedObject, so that you avoid the unnecessary allocs when a record already exists.
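A minimal sketch of that check (the entity and attribute names come from the question; incomingID and context are assumed to be in scope):

NSFetchRequest *request = [[NSFetchRequest alloc] init];
request.entity = [NSEntityDescription entityForName:@"Message"
                              inManagedObjectContext:context];
request.predicate = [NSPredicate predicateWithFormat:@"messageID == %@", incomingID];

NSError *error = nil;
NSUInteger count = [context countForFetchRequest:request error:&error];
if (count == 0) {
    // Only pay for the alloc/insert when the record is genuinely new.
    Message *message = [NSEntityDescription insertNewObjectForEntityForName:@"Message"
                                                     inManagedObjectContext:context];
    message.messageID = incomingID;
}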
I don't think there is a way to easily guarantee an attribute is unique without doing a lot of work on your own. You can, of course, use CFUUIDCreate to create a globally unique UUID, which should be unique even in a multithreaded environment. But...
The objectID (of type NSManagedObjectID) of every managed object is guaranteed to be unique within its persistent store coordinator. Since you can add arbitrarily many persistent stores to the coordinator, the objectIDs are effectively globally unique. Why don't you use the objectID as your messageID? You can't, of course, change the objectID once it's assigned (and it won't be assigned until the context containing the inserted object is saved; until then it is a temporary, but still unique, ID).
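A small sketch of that approach (assuming a message object in a context that is about to be saved):

NSManagedObjectID *oid = message.objectID;
if (oid.isTemporaryID) {
    // Ask the store for the permanent ID before relying on it.
    NSError *error = nil;
    [context obtainPermanentIDsForObjects:@[message] error:&error];
}
// URIRepresentation gives a stable, serializable form of the ID.
NSURL *uri = [message.objectID URIRepresentation];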
So you have an NSManagedObjectContext for each thread, backed by the same persistent store, is that correct? And before you save a context, you'd like to make sure the messageID is unique, that is, that you are not updating an existing row and that it is not present in one of the other contexts, correct?
Given that model (correct me if I misunderstand), I think you'd be better served having one object that manages access to the persistent store. That way, all threads would update one context and you can do your validation in there, using Marcus's -countForFetchRequest:error: suggestion. Granted, that places a bottleneck on this operation.
Just to add my 2 cents: I think inconsistencies will occur sooner or later anyway, and the only way to mitigate them seems to be at the application level, with rather complex code.
So in my case I decided to allow duplicate values for what are supposed to be "unique" fields.
I added code, however, that detects these problems later (e.g. when a fetch that should return 1 object returns more than 1) and fixes them when they occur (usually by deleting).
It's a "go ahead, make a mistake, ill fix it later for you"-strategy.
This is not ideal, of course, but a valid way to attack this problen, imho.
Is there a way of knowing the ID of the identity column of a record inserted via InsertOnSubmit beforehand, i.e. before calling the data context's SubmitChanges?
Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database).
I'd like to do it this way: create a Directory object, set its name and attributes,
then InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files. Currently I need to call SubmitChanges for each directory so that the database mapping fills in its ID column, but this creates a lot of transactions and database accesses, and I imagine that if I did the inserting in one batch, the performance would be better.
What I'd like to do is somehow use Directory.ID before committing changes: create all my File and Directory objects in advance and then do one big submit that puts everything into the database. I'm also open to solving this via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.
One way to get around this is to not use an identity column. Instead, build an IdService that the code can use to get a new id each time a Directory object is created.
You can implement the IdService with a table that stores the last id used. When the service starts up, have it grab that number. The service can then increment away while Directory objects are created, and update the table with the new last id used at the end of the run.
Alternatively, and a bit safer: when the service starts up, have it grab the last id used and immediately update the table by adding 1000 (for example). Then let it increment away. If it uses up those 1000 ids, have it grab the next 1000 and update the table again. The worst case is that you waste some ids, but if you use a bigint you are never going to care.
Since the Directory id is now controlled in code you can use it with child objects like Files prior to writing to the database.
Simply putting a lock around id acquisition makes this safe to use across multiple threads. I've been using this in a situation like yours. We're generating a ton of objects in memory across multiple threads and saving them in batches.
This blog post will give you a good start on saving batches in Linq to SQL.
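A minimal in-memory sketch of that IdService (the class and member names are illustrative, and the database round trip is stubbed out with a field standing in for the last-id table):

public class IdService
{
    private readonly object _sync = new object();
    private long _next = 1;     // next id to hand out
    private long _ceiling;      // last id in the currently reserved block
    private long _storedLastId; // stands in for the "last id used" table row

    public long NextId()
    {
        lock (_sync) // safe to call from multiple threads
        {
            if (_next > _ceiling)
            {
                // In production this would be one atomic round trip, e.g.
                // UPDATE LastIds SET LastId = LastId + 1000 OUTPUT INSERTED.LastId
                _storedLastId += 1000;
                _ceiling = _storedLastId;
                _next = _ceiling - 1000 + 1;
            }
            return _next++;
        }
    }
}

Each new Directory then takes its ID from NextId() up front, so its child Files can reference Directory.ID before the batch is ever submitted.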
I'm not sure off the top of my head if there is a way to run a straight SQL query in LINQ, but this query will return the current identity value of the specified table:
USE [database];
GO
DBCC CHECKIDENT ("schema.table", NORESEED);
GO
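For what it's worth, LINQ to SQL can run raw SQL through DataContext.ExecuteQuery<T>. DBCC CHECKIDENT reports its result as a message rather than a result set, so this sketch uses IDENT_CURRENT instead (MyDataContext and dbo.Directories are illustrative names):

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        using (var db = new MyDataContext("connection string here"))
        {
            // IDENT_CURRENT returns the last identity value generated for the
            // table in any session; it maps to SQL numeric, hence decimal.
            decimal current = db.ExecuteQuery<decimal>(
                "SELECT IDENT_CURRENT('dbo.Directories')").Single();
            Console.WriteLine(current);
        }
    }
}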