I want to rename an actor service, redeploy it, and migrate the data to the "new" (renamed) actor.
When I only rename the actor and deploy it, I get this error:
Services must be explicitly deleted before removing their Service Types
That makes sense, because the name in the manifest has changed.
But what is the recommended/easiest way to rename actors and keep the data?
I could back up the data and restore it into the new actor, but that does not fit a CI/CD way of working.
I found a related question, Service Fabric: removed actors and now upgrade fails, but it seems a lot of manual steps are involved.
I would first deploy a version which:
extends the existing actors with the ability to read their state
creates the new actors - when needed - by reading the state of the corresponding existing actor and then deletes that existing actor (you really do not need it anymore, right?)
If you do not delete the old actor once it has been converted, you double the storage space needed for the conversion.
Finally, deploy a version which:
deletes the unconverted existing actors, or converts the remaining existing actors into the new type
in the new actors, disables the lookup of the old actors' state (this is easy to forget)
Maybe you can always throw away state in a CI setup, while a CD setup requires state conversion?
By the way, you should avoid renaming state, because that is a breaking change.
Can anyone tell me how to exclude system objects while taking a backup of a queue manager using the save queue manager (saveqmgr) and dump queue manager (dmpmqcfg) commands?
Another tool that you can use instead of saveqmgr or dmpmqcfg, and which can exclude SYSTEM.* objects when making a backup of all your queue manager object definitions, is MO71.
Personally, I think it is a bad idea to exclude SYSTEM.* objects. You might have particular values used for SYSTEM.DEFAULT.MODEL.QUEUE, SYSTEM.DEF.SVRCONN, etc... that may be important when rebuilding a queue manager.
You could write a simple shell script or batch file to copy all objects to a new MQSC file but exclude ones like SYSTEM.ADMIN.*, SYSTEM.AUTH.DATA.QUEUE, etc...
There is no easy way to tell dmpmqcfg to exclude the SYSTEM.* objects. You could issue the command multiple times to include all your other object prefixes, but it would be easier to simply delete the SYSTEM.* objects from the output it produces.
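For illustration, here is a minimal sketch of that kind of post-processing filter. It assumes the dmpmqcfg output is a plain MQSC file in which each command may continue over several lines ending in '+'; the input and output file names are made-up examples.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Copies an MQSC dump to a new file, dropping every definition of a SYSTEM.* object.
// Assumes dmpmqcfg-style output where commands continue onto the next line with a trailing '+'.
public class StripSystemObjects {
    public static void main(String[] args) throws IOException {
        List<String> in = Files.readAllLines(Paths.get("qmgr-full.mqsc"));   // example input file
        List<String> out = new ArrayList<>();
        boolean skipping = false;      // true while inside a SYSTEM.* definition
        boolean continuation = false;  // true when the previous line ended with '+'
        for (String line : in) {
            if (!continuation) {
                // Start of a new MQSC command: keep it unless it defines a SYSTEM.* object.
                skipping = line.contains("('SYSTEM.");
            }
            if (!skipping) {
                out.add(line);
            }
            continuation = line.trim().endsWith("+");
        }
        Files.write(Paths.get("qmgr-nosystem.mqsc"), out);                   // example output file
    }
}
```

Because only the first line of each command is inspected, attribute values on continuation lines (for example a dead-letter queue name that happens to start with SYSTEM.) are left untouched.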
I am trying to figure out how GlobalKTable works, and I noticed that my in-memory key-value store is not repopulated after a restart. However, the documentation suggests that it should be, since the whole topic is replicated to every client.
When I debug my application I see that there is a file at /tmp/kafka-streams/category-client-1/global/.checkpoint containing an offset for my topic. This is probably needed for stores that persist their data, to speed up restarts; however, because there is an offset in this file, my application skips restoring its state.
How can I be sure that each restart or fresh start loads the whole data of my topic?
Because you are using an in-memory store, I assume that you are hitting this bug: https://issues.apache.org/jira/browse/KAFKA-6711
As a workaround, you can delete the local checkpoint file for the global store -- this triggers bootstrapping on restart. Or you can switch back to the default RocksDB store.
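A minimal sketch of both workarounds, assuming a recent Kafka Streams version, the default state directory /tmp/kafka-streams, and the application id from the question; the topic name, store name, and broker address are placeholders.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class GlobalStoreWorkaround {
    public static void main(String[] args) throws Exception {
        // Workaround 1: delete the global checkpoint file before starting the topology,
        // so the (in-memory) global store is bootstrapped from the topic again.
        Files.deleteIfExists(
                Paths.get("/tmp/kafka-streams", "category-client-1", "global", ".checkpoint"));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "category-client-1");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        StreamsBuilder builder = new StreamsBuilder();

        // Workaround 2: back the GlobalKTable with the default persistent (RocksDB) store
        // instead of Stores.inMemoryKeyValueStore(...), which is affected by KAFKA-6711.
        GlobalKTable<String, String> categories = builder.globalTable(
                "category", // placeholder topic
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String>as(Stores.persistentKeyValueStore("category-store")));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
```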
I saw that Neo4j can run as an impermanent DB for unit-testing purposes, but I'm not sure whether that fits my needs. I have my data stored in Neo4j the usual way (persistent), but, starting from that data, I want to let each user open an "experimental session": the user adds/deletes nodes and relationships, but NOT permanently, just to experiment with the data (after the session those edits should be lost). The edits shouldn't be saved, and obviously they shouldn't be visible to the others. What's the best way to accomplish that?
Using an impermanent database should work (a minimal sketch follows this list). You would:
need to import the data to each new database
spring-data-neo4j is not able to connect to multiple databases (in the current release), so you would need to start multiple instances of your application, e.g. in a Tomcat container
when your application stops (or crashes) you would obviously lose data
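As a rough illustration of the first point, here is what creating a throwaway database and loading base data into it could look like with the embedded test API. This assumes Neo4j 3.x with the test artifact (org.neo4j.test.TestGraphDatabaseFactory) on the classpath, and the Cypher is only a placeholder for however you import your data.

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.test.TestGraphDatabaseFactory;

public class ExperimentalSession {
    public static void main(String[] args) {
        // One throwaway, in-memory database per experimental session.
        GraphDatabaseService db = new TestGraphDatabaseFactory().newImpermanentDatabase();

        // Import the base data into the fresh database (placeholder Cypher).
        try (Transaction tx = db.beginTx()) {
            db.execute("CREATE (p:Product {name: 'base-item'})");
            tx.success();
        }

        // ... the user experiments with the data here ...

        // When the session ends (or the application stops), the edits are gone.
        db.shutdown();
    }
}
```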
Or you could potentially use only one database, with the base data being public (= visible to everyone), and then add an owner property to all new nodes/relationships.
When querying the data you would check that the property is either public or the current user.
At the end of the session you would just delete all nodes and relationships with the given owner.
If you also want to edit existing data then it gets more complicated: you could create a copy of the node/relationship and handle that somehow, or, if the dataset is not too large, copy the whole thing.
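A minimal sketch of the owner-property idea, using plain Cypher over the current Neo4j Java driver rather than spring-data-neo4j; the bolt URL, credentials, label, and property names are just examples.

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class OwnerScopedSession {
    public static void main(String[] args) {
        String owner = "session-42"; // e.g. the user's session id

        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "secret"));
             Session session = driver.session()) {

            // Nodes created during the experimental session carry an owner property.
            session.run("CREATE (n:Item {name: $name, owner: $owner})",
                    Values.parameters("name", "draft", "owner", owner));

            // Queries only see public data plus this session's own data.
            session.run("MATCH (n:Item) WHERE n.owner = 'public' OR n.owner = $owner RETURN n",
                    Values.parameters("owner", owner));

            // At the end of the session, drop everything that belongs to this owner.
            session.run("MATCH (n {owner: $owner}) DETACH DELETE n",
                    Values.parameters("owner", owner));
        }
    }
}
```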
You can build a docker image from the neo4j base image (or build your own) and copy your graph.db into it.
Then you can have every user start a docker container from said image.
If that doesn't answer your question, more info is needed.
Normally, when I backed up the Core Data file for my app, I would just copy the .sqlite file to another location while the app was running. But now that journaling (WAL) is enabled, this no longer works. I cannot see a way to make NSPersistentStoreCoordinator or NSManagedObjectContext write a new file. I'm guessing I have two options:
Close the persistent store, open it again with @{@"journal_mode" : @"DELETE"}, and then copy the .sqlite file.
Add another persistent store and maybe copy from the original persistent store to the new one?
Any better ideas?
Thank you.
Changing the journal mode will eliminate the journal files, so it's simple. I don't know that I'd trust it for your use, though, because there's no guarantee that Core Data has actually flushed all new changes to the SQLite file. It might be OK, but there might be some in-memory changes that Core Data hasn't written out yet. It's almost certainly safe, but there's a small chance that it won't work right once in a while.
Option 2 would be safer, though more work. I'd create the second persistent store using NSPersistentStoreCoordinator's migratePersistentStore:toURL:options:withType:error: method (which the docs specifically mention as being useful for "save as" operations). Telling Core Data to create the copy for you should ensure that everything necessary is actually copied. Just don't do this on your main persistent store coordinator, because after migration, the PSC drops the reference to the original store object (the file's still there, but it's no longer used by that PSC). The steps would be
Create a new migrate-only NSPersistentStoreCoordinator and add your original persistent store file.
Use this new PSC to migrate to a new file URL.
Drop all references to this new PSC; don't use it for anything else.
What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to allow my users to demo all functionality of the site; this means allowing them to delete records.
The closest example I have seen so far is the Telerik controls demos, where the dataset is saved in session on first load and the user is then allowed to manipulate the data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application) then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This would ensure that every session uses its own data.
create a version of your Access DB which will be used as a fresh template for each user
on session start, copy the template and name the copy after the user's session ID
use that individual MDB for the session
Note: The only drawback I can see here is that you need to remove the unused MDB files, as they can pile up after some time. You could do it with a scheduled task, or even on session start before you create a new copy.
I am not sure what you can use to check whether a copy is still in use, but the file's creation date can help, or maybe the .ldb lock file as well (if it does not exist = unused).
You can store a connection, or even an object, in a session variable, as long as you remember at retrieval time what kind of variable you stored. I have never stored a dataset in a session variable, but I have stored a lot of arrays in session variables, so you can use the ADO GetRows method to load a complete recordset into a session variable as an array.
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb where 1234567890 is the user's session ID.
Alter the user's connection string to point to the fresh database copy. From this point on, your app can operate like "normal" with no further modifications.
Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc) then you could do this at the same time you do step #1.
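If you go with a scheduled task for the cleanup, the logic is just "delete every copy in /tempdb whose last-modified time is older than some cutoff". Here is a rough sketch of that logic (in Java, only to make it concrete); the /tempdb path and the 12-hour cutoff are example values, and a batch or VBScript file doing the same thing would work just as well.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Deletes stale per-session database copies from the temp folder.
public class CleanTempDbCopies {
    public static void main(String[] args) throws IOException {
        Path tempDb = Paths.get("/tempdb");                          // example path from the answer
        Instant cutoff = Instant.now().minus(12, ChronoUnit.HOURS);  // example cutoff

        try (DirectoryStream<Path> copies = Files.newDirectoryStream(tempDb, "*.mdb")) {
            for (Path copy : copies) {
                // A copy that has not been touched recently belongs to an expired session.
                if (Files.getLastModifiedTime(copy).toInstant().isBefore(cutoff)) {
                    Files.deleteIfExists(copy);
                }
            }
        }
    }
}
```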