I have a table that essentially represents uploads, so when an instance of the model representing this table is deleted, I want the corresponding file to be deleted from my uploads folder as well.
The way I've gone about this so far is to override the delete method, so that before the model instance is deleted, the file is deleted too.
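Roughly like this (a minimal sketch; assume the file path is stored in a path attribute):

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Support\Facades\Storage;

    class Upload extends Model
    {
        // Remove the underlying file before deleting the row itself.
        public function delete()
        {
            Storage::delete($this->path);
            return parent::delete();
        }
    }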
Problem: not only does this not work for cascade deletions, it also doesn't work if I delete a Collection.
I've looked at Events, like Model::deleting, but they suffer from exactly the same problem (namely they're not triggered by cascade deletions or bulk deletions).
I have also considered a SQL trigger, but it doesn't seem like I can delete files from SQL (inform me if I can, I'd love it! I'm using MySQL, btw).
Do I have an option that is classier than making a separate query and iterating over it to delete the files every time I need to do a bulk or cascade deletion, or is this really it?
Take a look at https://laravel-news.com/laravel-model-events-getting-started
You need to define an event inside your model.
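For example, a minimal sketch along those lines (again assuming a path attribute; adjust names to your model):

    use Illuminate\Database\Eloquent\Model;
    use Illuminate\Support\Facades\Storage;

    class Upload extends Model
    {
        protected static function boot()
        {
            parent::boot();

            // Fires when delete() is called on a retrieved model instance;
            // bulk query deletes and database-level cascades bypass this hook.
            static::deleting(function ($upload) {
                Storage::delete($upload->path);
            });
        }
    }

As the question notes, this only fires for per-instance deletes, so for bulk deletions you would still retrieve the models and delete them one at a time, e.g. Upload::where('user_id', $id)->get()->each(function ($upload) { $upload->delete(); }).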
I've seen that sometimes CREATE is used to create nodes, and in other situations MERGE is used. What's the difference, and when should one be used in place of the other?
CREATE does just what it says. It creates, and if that means creating duplicates, well then it creates.
MERGE does the same thing as CREATE, but also checks to see if a node already exists with the properties you specify. If it does, then it doesn't create. This helps avoid duplicates.
Here's an example: I use CREATE twice to create a person with the same name.
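A minimal Cypher sketch of that, and of what MERGE does instead (label and property names are illustrative):

    CREATE (:Person {name: 'Alice'});
    CREATE (:Person {name: 'Alice'});
    // the graph now contains two distinct Alice nodes

    MERGE (:Person {name: 'Alice'});
    // matches an existing Alice node instead of creating a third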
CREATE should be used when you are absolutely certain that the information doesn't exist in the database (for example, when you are loading data). MERGE is used whenever there is a possibility that the node or relationship already exists and you don't want to duplicate it. MERGE shouldn't be used indiscriminately, though, as it's considerably slower than the CREATE clause.
I am new to Cassandra. I am looking at many examples online. Here is one from JHipster Cassandra examples on GitHub:
https://gist.github.com/jdubois/c3d3bedb869466731316
The repository save(user) method does a read (to check for existence), then a delete and re-insert of the existing user across all the denormalized tables whenever the user data changes.
Is this best practice?
Is this only because of how the data model for this sample is designed?
Is this sample's design a result of twisting a POJO framework into a NoSQL database design?
When would I want to just do an update in Cassandra? It supports updates at the field level, so it seems like that would be preferred.
First of all, the delete operations should be part of the batch for more robust error handling. But it looks like there are also some concurrency issues with the code: it updates the user based on the user value read beforehand, and it's not safe to assume that will still be the latest value when save() actually executes. It will also simply overwrite any keys in the lookup tables that might already be in use by a different user at that point, e.g. the login could already belong to another user while insertByLoginStmt executes.
It is not necessary to delete a row before inserting a new one.
But if you are replacing rows and the new columns differ from the existing columns, then you need to delete all existing columns and insert the new ones. Or insert the new and delete the old; the order doesn't matter if it happens in a batch.
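A rough CQL sketch of both situations (table and column names are made up):

    -- Replacing a row whose key changed: delete old, insert new, atomically.
    BEGIN BATCH
        DELETE FROM users_by_login WHERE login = 'old_login';
        INSERT INTO users_by_login (login, user_id, email)
        VALUES ('new_login', 42, 'user@example.com');
    APPLY BATCH;

    -- Field-level update, when the primary key is unchanged:
    UPDATE users SET email = 'user@example.com' WHERE user_id = 42;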
I need to insert a field in the middle of the current fields of a database table. I'm currently doing this in VB6, but may get the green light to do it in .NET. Anyway, I'm wondering: since Access gives you the ability to "insert" fields in a table, is there a way to do this in ADOX? If I had to, I could step back and use DAO, but I'm not sure how to do it there either.
If you're wondering why I want to do this: the application's database has changed over time, and I'm being asked to create an upgrade program for some of the installations with older versions.
Any help would be great.
This should not be necessary. Use the correct list of fields in your queries to retrieve them in the required order.
BUT, if you really need to do that, the only way I know is to create a new table with the fields in the required order, read the data from the old table into the new one, delete the old table, and rename the new table as the old one.
I hear you: in Access the order of the fields is important.
If you need a comprehensive way to work with ADOX, your go-to place is Allen Browne's website. I have used it to go from novice to pro in handling Access database changes. Here it is: www.AllenBrowne.com. Go to Access Tips, then scroll down to ADOX Code.
That is also where I normally refer people who have doubts about the capabilities of Access as a database :)
In your case, you will work through creating a new table with the new field in the right position, copying the data to the new table, applying properties to the fields, deleting the original table, and renaming the new table to the required (original) name.
That is the correct order. Do not apply field properties before copying the data. Some indexes and key properties may not be applied when the fields already have data.
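A rough VB6/DAO sketch of that sequence (table, field, and path names are hypothetical; error handling and copying of the remaining field properties are left out):

    Dim db As DAO.Database
    Set db = DBEngine.OpenDatabase("C:\Data\App.mdb")

    ' 1. Create the new table with the new field in the desired position.
    db.Execute "CREATE TABLE Tmp_Customers (ID LONG, [Name] TEXT(50), " & _
               "NewField TEXT(50), Amount CURRENCY)"

    ' 2. Copy the data; NewField stays Null for existing rows.
    db.Execute "INSERT INTO Tmp_Customers (ID, [Name], Amount) " & _
               "SELECT ID, [Name], Amount FROM Customers"

    ' 3. Apply keys/indexes only after the data is in, then swap the tables.
    db.Execute "CREATE UNIQUE INDEX PrimaryKey ON Tmp_Customers (ID) WITH PRIMARY"
    db.Execute "DROP TABLE Customers"
    db.TableDefs.Refresh
    db.TableDefs("Tmp_Customers").Name = "Customers"

    db.Close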
Over time, I have automated this, so I just run an application to detect and implement the required changes for me. But that took A LOT of work-weeks.
I'm having some trouble with Visual Studio and the creation of DataSets from a database.
Whenever I create a new TableAdapter, the Insert method's parameters get, let's just say, messed up.
The database is an MS Access 2000 database file. If I create a new TableAdapter, everything seems to work just fine: I select the option to create DatabaseDirect methods, and it all goes through without errors.
Then I look at the statements. All perfectly fine. But then I check the Insert method's parameters and I see this:
[Screenshot of the parameter list: http://img243.imageshack.us/img243/3175/paramlist.png]
All the parameters are set to default Strings with no name, and I have to rename them and define all of their types all over again.
The interesting thing is, this never affects the last parameter (as you can see, Comment is not renamed, etc.), and it only happens to the Insert method. When I check the Update method (which uses the exact same parameters), they are all correctly named, and each type matches the one in the database.
[Screenshot of the correct parameter list: http://img816.imageshack.us/img816/853/paramlistnormal.png]
Is this a known bug? Did I do something wrong when creating the TableAdapter?
You see, it's not that big an issue; I just can't understand why it works with every other method but not with Insert. And it's quite a fuss to rename and retype all of the parameters when you create a TableAdapter for a table that has significantly more fields than the 12 I showed you.
It looks like at least one other person has had a similar problem. Although this post doesn't specifically mention Access, the symptoms seem to be the same as what you've seen.
Unfortunately, there wasn't a clear solution listed there. The OP only says that he was able to call the automatically-generated Insert command, rather than trying to create his own Insert query, and so he did not need to resolve his original issue.
Also, he mentions that everything seems to work fine with all of the other tables in his database, and that this happens with only one table. That may mean that it's not an Access-specific issue, but rather that the tables in your database have something in common with the table in this post, and that common factor is what is preventing the TableAdapter from working as it should.
Is there a way of knowing the ID of an identity column for a record inserted via InsertOnSubmit beforehand, i.e. before calling the DataContext's SubmitChanges?
Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database).
I'd like to do it this way: create a Directory object, set its name and attributes, then InsertOnSubmit it into the DataContext.Directories collection, and then reference Directory.ID in its child Files. Currently I need to call SubmitChanges for the 'directory' to be inserted into the database, at which point the mapping fills in its ID column. But this creates a lot of transactions and database round trips, and I imagine that if I did the inserting in a batch, the performance would be better.
What I'd like to do is somehow use Directory.ID before committing changes: create all my File and Directory objects in advance, and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.
One way to get around this is to not use an identity column. Instead build an IdService that you can use in the code to get a new Id each time a Directory object is created.
You can implement the IdService by having a table that stores the last id used. When the service starts up have it grab that number. The service can then increment away while Directory objects are created and then update the table with the new last id used at the end of the run.
Alternatively, and a bit safer, when the service starts up have it grab the last id used and then update the last id used in the table by adding 1000 (for example). Then let it increment away. If it uses 1000 ids then have it grab the next 1000 and update the last id used table. Worst case is you waste some ids, but if you use a bigint you aren't ever going to care.
Since the Directory id is now controlled in code you can use it with child objects like Files prior to writing to the database.
Simply putting a lock around id acquisition makes this safe to use across multiple threads. I've been using this in a situation like yours. We're generating a ton of objects in memory across multiple threads and saving them in batches.
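A minimal C# sketch of that block-reservation idea (the Counter class is an in-memory stand-in for the real persistence):

    public class IdService
    {
        private readonly object _gate = new object();
        private long _nextId = 1;
        private long _blockEnd; // 0, so the first call triggers a reservation

        public long GetNextId()
        {
            lock (_gate) // makes the service safe across multiple threads
            {
                if (_nextId > _blockEnd)
                    ReserveBlock(1000); // claim the next 1000 ids up front
                return _nextId++;
            }
        }

        private void ReserveBlock(int size)
        {
            long newLast = Counter.Advance(size);
            _blockEnd = newLast;
            _nextId = newLast - size + 1;
        }

        private static class Counter
        {
            // Stand-in for the "last id used" table; the real service would
            // issue an atomic UPDATE and read the new value back. Unused ids
            // in a block are simply wasted, which a bigint absorbs easily.
            private static long _lastIdUsed;
            public static long Advance(int size) => _lastIdUsed += size;
        }
    }

Since the service owns the ids, you can assign Directory.ID yourself and wire up the child Files entirely in memory before a single SubmitChanges.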
This blog post will give you a good start on saving batches in Linq to SQL.
Not sure off the top of my head if there is a way to run a straight SQL query in LINQ, but this query will return the current identity value of the specified table.
    USE [database];
    GO

    -- NORESEED reports the current identity value without changing it.
    DBCC CHECKIDENT ("schema.table", NORESEED);
    GO
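For what it's worth, LINQ to SQL can run raw SQL through DataContext.ExecuteQuery, so a sketch like this should work (the context variable and table name are illustrative):

    // IDENT_CURRENT reports the last identity value generated for the
    // table in any session, without reseeding anything.
    var current = context.ExecuteQuery<decimal>(
        "SELECT IDENT_CURRENT('dbo.Directories')").Single();

Note that peeking at the current identity value is racy if anything else is inserting into the table, which is another argument for the IdService approach above.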