I took a data set of 10 collections from API xyz and uploaded it to Firebase for the first time.
On the second update, API xyz only has 9 collections available. How do I make the second update with only those 9 collections, so that the existing ones are overwritten and the extra collection from the first update is deleted?
I used to delete all the collections and then re-upload, but that method is not appropriate.
To perform bulk updates on your collections, you need to use Transactions and Batched Writes, which I imagine you are already using for the first update.
If you just want to overwrite the old data, use the set() method to update the collection without checking for the existence of any document in it; this way, it overwrites the data that already exists. You can get more information in this question, where a user describes the function they are using that overwrites data.
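If it helps, here's a minimal sketch of that idea with the Firebase Admin SDK for Node; the collection and function names are made up, and I'm assuming one document per item pulled from the API:

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Overwrite a collection with the latest payload from the API and delete
// the documents that no longer exist upstream.
async function syncCollection(name: string, docs: Record<string, object>) {
  const batch = db.batch();
  const existing = await db.collection(name).get();
  for (const snap of existing.docs) {
    if (!(snap.id in docs)) batch.delete(snap.ref); // gone from the API
  }
  for (const [id, data] of Object.entries(docs)) {
    batch.set(db.collection(name).doc(id), data); // set() without merge overwrites
  }
  await batch.commit(); // note: a single batch is limited to 500 operations
}

Since a Firestore collection effectively disappears once its last document is deleted, deleting every document of the dropped 10th collection covers the "extra collection" case too.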
Let me know if the information helped you!
I have a table that basically represents uploads, so when an instance of the model representing this table is deleted, I want the file it represents to be deleted from my uploads folder as well.
The way I've gone about this thus far is basically overriding the delete method so that, before the model instance is deleted, the file is deleted too.
Problem: not only does this not work for cascade deletions, it also doesn't work if I delete a Collection...
I've looked at Events, like Model::deleting, but they suffer from exactly the same problem (namely they're not triggered by cascade deletions or bulk deletions).
I have also considered a SQL trigger, but it doesn't seem like I can delete files from SQL (inform me if I can, I'd love it! I'm using MySQL, btw).
Do I have an option that is classier than just making a separate query and iterating over it to delete the files every time I need to do a bulk deletion/cascade, or is this really it?
Take a look at https://laravel-news.com/laravel-model-events-getting-started
You need to define an event inside your model.
I am new to Cassandra. I am looking at many examples online. Here is one from JHipster Cassandra examples on GitHub:
https://gist.github.com/jdubois/c3d3bedb869466731316
The repository's save(user) method does a read (to check for existence), then a delete and re-insert of the existing user across all the denormalized tables whenever the user data changes.
Is this best practice?
Is this only because of how the data model for this sample is designed?
Is this sample's design a result of twisting a POJO framework into a NoSQL database design?
When would I want to just do an update in Cassandra? It supports updates at the field level, so it seems like that would be preferred.
First of all, the delete operations should be part of the batch for more robust error handling. But it looks like there are also some concurrency issues with the code: it updates the user based on the user value read earlier, and it's not safe to assume that will still be the latest value by the time save() actually executes. It will also simply overwrite any keys in the lookup tables that might already be in use by a different user at that point, e.g. the login could already belong to another user while insertByLoginStmt executes.
It is not necessary to delete a row before inserting a new one.
But if you are replacing rows and the new columns differ from the existing ones, then you do need to delete the old columns and insert the new ones. Whether you insert first or delete first does not matter, as long as both happen in the same batch.
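To illustrate the batch point, here's a rough sketch with the DataStax Node.js driver; the keyspace, table, and column names are invented, and it assumes the login is the only denormalized key being changed:

import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "demo",
});

// Remove the stale lookup row and write the new data in one logged batch,
// so a failure can't leave the denormalized tables half-updated.
async function saveUser(oldLogin: string, user: { id: string; login: string }) {
  await client.batch(
    [
      { query: "DELETE FROM user_by_login WHERE login = ?", params: [oldLogin] },
      { query: "INSERT INTO user_by_login (login, id) VALUES (?, ?)", params: [user.login, user.id] },
      { query: "UPDATE user SET login = ? WHERE id = ?", params: [user.login, user.id] },
    ],
    { prepare: true },
  );
}

Keep in mind the batch only gives you atomicity, not isolation, so it does nothing about the race described above where the login already belongs to another user.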
I'm not sure whether I need a way to lock records returned by the context or simply need a new approach.
Here's the story: we currently have a small number of apps that integrate with our CRM. Some of them open an XrmServiceContext and return a few thousand records to perform updates. These scripts call SaveChanges along the way, but there will still be accounts near the end that are saved a couple of minutes after the context returned them. If a user updates a record during this time, their changes are overwritten by the script.
Is there a way of locking the records until the context has saved the update back or is there a better approach I should be taking?
Kit
In my opinion, this type of database transaction issue is what CRM is currently lacking the most. There is no way to ensure that someone else doesn't monkey with your data; it's always a last-one-in-wins world in CRM.
With that being said, my suggestion would be to only update the attributes you care about. If you're returning all columns for an entity, when you update that entity, you're possibly going to update all the attributes of the entity, even if you only updated one of them.
If you're dealing with a system where you can't tolerate the last-one-in-wins mentality, then you're probably better off not using CRM.
Update 1
CRM 2015 SP1 and above supports Optimistic Updates, which allow the use of a version number to ensure that no one has updated the record since you retrieved it.
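As a rough illustration of the idea, here's what it looks like through the Web API in newer versions (the URL, entity, and token handling are placeholders; the SDK exposes the same thing through the record's row version):

// The record's @odata.etag wraps its row version; sending it back as
// If-Match makes the server reject the update with 412 Precondition
// Failed if anyone saved the record in the meantime.
async function updateAccountName(baseUrl: string, id: string, etag: string, token: string) {
  const res = await fetch(`${baseUrl}/api/data/v9.2/accounts(${id})`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
      "If-Match": etag, // e.g. 'W/"12345678"', captured at retrieve time
    },
    body: JSON.stringify({ name: "Updated name" }),
  });
  if (res.status === 412) {
    // Someone else updated the record first: re-retrieve, reapply, retry.
  }
}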
You have several options here; it just depends on what you want to do. First of all, though, if you can move some of these automated processes to off-hours, that's the best option.
Another option would be to retrieve each record one by one instead of 1000+ at a time.
If you are only updating a percentage of the records retrieved, then you would be better off checking before saving whether an update occurred (by comparing the modified date). If the modified date changed, do a single fresh retrieve of the record and then save.
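A quick sketch of that check against the Web API (modifiedon is the standard attribute; everything else here is a placeholder):

// Returns true if the record changed after we retrieved it, in which
// case the caller should do a single fresh retrieve before saving.
async function changedSince(baseUrl: string, id: string, retrievedAt: Date, token: string) {
  const res = await fetch(
    `${baseUrl}/api/data/v9.2/accounts(${id})?$select=modifiedon`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  const { modifiedon } = await res.json();
  return new Date(modifiedon) > retrievedAt;
}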
At first thought, I would create a field or status that indicates a pending operation and then use JScript in the form's OnLoad event to warn the user or lock the form. When your process completes, it could clear the flag.
I am working on a WP7 app that contains
CategoryGroups
Categories
Products
The rows for each of these entities are populated on first run of the application.
The issue is that when the app gets published, the rows in each of these entities will change (added, deleted, modified). I would like some suggestions on how I should handle this. Any pointers to existing code samples would be great.
I am using an object-oriented database to store my entities. The app also allows the user to add their own entities (which get added to the database as personalized, flagged entities). One solution I was considering is to read an XML file from the server and then loop through the database entries, making the necessary modifications in the database. On the first run, all the entities would just get inserted. On subsequent runs, if the version number attribute in the XML is different, the system-populated data is reloaded from the XML but the user data is preserved.
Also, maybe only check for a new XML file on the server when an internet connection is available, and only periodically (say, every two weeks).
Any other suggestions are welcome. If there is a simpler, cleaner way - please share.
Pratik
I think it's fair to say that this question has nothing to do with WP7 and everything to do with finding an efficient way to compute and deliver update deltas.
Timestamp your items. When requesting an update, specify the time of the last update. Your server can trivially query for items newer than this and return a delta. On the client (i.e. on the phone) it is not necessary to store a last-update time, because you can simply add one second to the most recent timestamp among the items present on the phone.
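Sketched out with an invented Item shape and numeric timestamps, the whole protocol is just a filter:

interface Item { id: string; updatedAt: number; payload?: unknown }

// Server side: return only the items changed after the client's cutoff.
function computeDelta(allItems: Item[], since: number): Item[] {
  return allItems.filter(i => i.updatedAt > since);
}

// Client side: the next cutoff is simply the newest timestamp among the
// items already on the phone; no separate last-sync value is stored.
function nextSince(items: Item[]): number {
  return items.reduce((max, i) => Math.max(max, i.updatedAt), 0);
}

One caveat: deletions need a tombstone, i.e. a soft-deleted item that still receives a fresh timestamp, or they will never show up in any delta.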
Is there a way of knowing the ID of an identity column for a record inserted via InsertOnSubmit beforehand, i.e. before calling the data context's SubmitChanges?
Imagine I'm populating some kind of hierarchy in the database, but I don't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database).
I'd like to create a Directory object, set its name and attributes, then InsertOnSubmit it into the DataContext.Directories collection, and then reference Directory.ID in its child Files. Currently I need to call SubmitChanges after each InsertOnSubmit so that the database mapping fills in the ID column, but this creates a lot of transactions and database accesses, and I imagine that if I did the inserting in one batch, the performance would be better.
What I'd like to do is somehow use Directory.ID before committing changes, create all my File and Directory objects in advance, and then do one big submit that puts everything into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all the operations were done directly in the database.
One way to get around this is to not use an identity column. Instead build an IdService that you can use in the code to get a new Id each time a Directory object is created.
You can implement the IdService by having a table that stores the last id used. When the service starts up have it grab that number. The service can then increment away while Directory objects are created and then update the table with the new last id used at the end of the run.
Alternatively, and a bit safer, when the service starts up have it grab the last id used and then update the last id used in the table by adding 1000 (for example). Then let it increment away. If it uses 1000 ids then have it grab the next 1000 and update the last id used table. Worst case is you waste some ids, but if you use a bigint you aren't ever going to care.
Since the Directory id is now controlled in code you can use it with child objects like Files prior to writing to the database.
Simply putting a lock around id acquisition makes this safe to use across multiple threads. I've been using this in a situation like yours. We're generating a ton of objects in memory across multiple threads and saving them in batches.
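Here's a rough sketch of that block-reservation idea; the names are invented, and reserveBlock stands in for an atomic "advance the last-id table by 1000 and return the previous value" query:

// Hands out ids from a reserved block, reserving a fresh block of 1000
// when the current one runs out. Worst case, some ids are wasted.
class IdService {
  private next = 0;
  private end = 0; // exclusive end of the reserved block
  private refill: Promise<void> | null = null;

  constructor(private reserveBlock: (size: number) => Promise<number>) {}

  async nextId(): Promise<number> {
    while (this.next >= this.end) {
      // Only one caller kicks off the refill; the rest await the same promise.
      this.refill ??= this.reserveBlock(1000).then(start => {
        this.next = start;
        this.end = start + 1000;
        this.refill = null;
      });
      await this.refill;
    }
    return this.next++;
  }
}

In a runtime with real threads, the lock mentioned above wraps this same acquisition step.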
This blog post will give you a good start on saving batches in LINQ to SQL.
Not sure off the top of my head whether there is a way to run a straight SQL query in LINQ, but this query will return the current identity value of the specified table.
USE [database];
GO
-- Reports the current identity value of the table; NORESEED means the
-- value is only checked, not changed.
DBCC CHECKIDENT ("schema.table", NORESEED);
GO