Do multiple versions of xcdatamodel mean that we need multiple xcmappingmodel files?

I have multiple versions of xcdatamodel files:
app1.0.xcdatamodel
app1.1.xcdatamodel
app1.2.xcdatamodel (current)
Does this mean I need multiple combinations of xcmappingmodel files to cover all upgrade scenarios?
app1.0_to_app1.1.xcmappingmodel (had this already)
app1.1_to_app1.2.xcmappingmodel (is it iterative?)
app1.0_to_app1.2.xcmappingmodel (too much?)
Thanks!

Core Data requires a mapping model that goes from the version of the data store the user currently has to the latest version of the data store. This means that you will need mapping models that go from v1 -> v2, v2 -> v3, and v1 -> v3.
From the Core Data Versioning and Migration Guide:
Tries to find a mapping model that maps from the managed object model for the existing store to that in use by the persistent store coordinator. Core Data searches through your application's resources for available mapping models and tests each in turn. If it cannot find a suitable mapping, Core Data returns NO and a suitable error. Note that you must have created a suitable mapping model in order for this phase to succeed.
As discussed in the Apple document Core Data Mapping.
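For reference, you can perform the same lookup explicitly. Here is a minimal Swift sketch of the search described in the quote, assuming the two NSManagedObjectModel instances are already loaded; it returns nil exactly in the case where migration would fail for lack of a suitable mapping model.

```swift
import CoreData

// Ask Core Data for a bundled .xcmappingmodel whose source and destination
// match the given models. Returns nil when no suitable mapping model exists,
// which is the failure case described in the documentation quote above.
func bundledMappingModel(from sourceModel: NSManagedObjectModel,
                         to destinationModel: NSManagedObjectModel) -> NSMappingModel? {
    NSMappingModel(from: [Bundle.main],
                   forSourceModel: sourceModel,
                   destinationModel: destinationModel)
}
```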

You could implement progressive data migration. Look for progressivelyMigrateURL in this sample: http://media.pragprog.com/titles/mzcd/code/ProgressiveMigration/AppDelegate.m
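For a feel of what that sample does, here is a condensed Swift sketch of the same progressive idea - not a drop-in replacement for the linked code. It assumes the caller passes the model versions in order (oldest to newest) and that the bundle contains a mapping model for each adjacent pair; error handling is abbreviated.

```swift
import CoreData

// Migrate the store one version step at a time until it matches the newest
// model, recursing after each successful step.
func progressivelyMigrate(storeAt storeURL: URL,
                          ofType storeType: String,
                          through modelVersions: [NSManagedObjectModel]) throws {
    let metadata = try NSPersistentStoreCoordinator.metadataForPersistentStore(
        ofType: storeType, at: storeURL, options: nil)

    // Done when the store is already compatible with the newest model.
    guard let latest = modelVersions.last,
          !latest.isConfiguration(withName: nil, compatibleWithStoreMetadata: metadata)
    else { return }

    // Find the version the store is currently at.
    guard let sourceIndex = modelVersions.firstIndex(where: {
        $0.isConfiguration(withName: nil, compatibleWithStoreMetadata: metadata)
    }) else {
        throw NSError(domain: "Migration", code: 1)  // unknown store version
    }
    let sourceModel = modelVersions[sourceIndex]
    let targetModel = modelVersions[sourceIndex + 1]

    // One bundled mapping model per adjacent pair of versions is assumed.
    guard let mapping = NSMappingModel(from: [Bundle.main],
                                       forSourceModel: sourceModel,
                                       destinationModel: targetModel) else {
        throw NSError(domain: "Migration", code: 2)  // missing mapping model
    }

    // Migrate to a temporary URL, swap the files, then take the next step.
    let manager = NSMigrationManager(sourceModel: sourceModel,
                                     destinationModel: targetModel)
    let tempURL = storeURL.deletingLastPathComponent()
        .appendingPathComponent(UUID().uuidString)
    try manager.migrateStore(from: storeURL, sourceType: storeType, options: nil,
                             with: mapping, toDestinationURL: tempURL,
                             destinationType: storeType, destinationOptions: nil)
    _ = try FileManager.default.replaceItemAt(storeURL, withItemAt: tempURL)
    try progressivelyMigrate(storeAt: storeURL, ofType: storeType,
                             through: modelVersions)
}
```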

The progressivelyMigrateURL method is a great sample, but I don't think you actually need it. New versions of your model appear gradually for as long as you develop the application, so at any given time you need only as many mapping models as the number of supported data model versions minus one, and no more - each mapping directly from an older version to the current one. (For example, you don't need app1.0_to_app1.1.xcmappingmodel any more, since 1.1 is no longer the latest version.) Every time you create a new version, you just retarget every mapping model you have at the new destination model and add one more if needed; in practice you may have to regenerate new ones and remove the old ones, though. The point is that migration in one stage, which doesn't force you to create any more mapping models than the progressive approach does, is much, much faster at runtime, as you may notice.
You also don't need to create mapping models for trivial cases. Either use lightweight migration (fall back to the default migration process when a concrete situation needs a mapping model that cannot be generated at runtime - in that case you do need to have it in your app bundle), or migrate with a mapping model created at runtime via the inferredMappingModelForSourceModel:destinationModel:error: method of NSMappingModel and then customized in code if needed. In the latter case you will have to trigger the migration manually, by calling the migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:destinationType:destinationOptions:error: method of an NSMigrationManager instance, as far as I understand.
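A minimal Swift sketch of that runtime-inferred path (store URLs are placeholders, and the SQLite store type is assumed): infer the mapping, optionally customize it, then drive the migration manually.

```swift
import CoreData

func migrateWithInferredMapping(from sourceModel: NSManagedObjectModel,
                                to destinationModel: NSManagedObjectModel,
                                storeURL: URL, destinationURL: URL) throws {
    // inferredMappingModelForSourceModel:destinationModel:error: in Swift.
    let mapping = try NSMappingModel.inferredMappingModel(
        forSourceModel: sourceModel, destinationModel: destinationModel)

    // ...customize mapping.entityMappings here if the inferred model
    // needs tweaking for your concrete case...

    // migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:
    // destinationType:destinationOptions:error:, triggered manually.
    let manager = NSMigrationManager(sourceModel: sourceModel,
                                     destinationModel: destinationModel)
    try manager.migrateStore(from: storeURL,
                             sourceType: NSSQLiteStoreType, options: nil,
                             with: mapping,
                             toDestinationURL: destinationURL,
                             destinationType: NSSQLiteStoreType,
                             destinationOptions: nil)
}
```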
Good luck!

Related

Is there a way to update a Google AutoML Translation model without having to change the API call for translation?

I am pretty new to using Google AutoML and I was wondering what the best practice was in the following scenario.
My goal is to update a Google AutoML Translate model without having to change the API call to get translations, and I am not sure if this is possible.
Currently the only way to update an AutoML Translate model is to create a new model, base it on the old one, and train it on the new examples (at least that seems to be the case). And when you make an API request for a translation, you must specify which model you want to use by giving that model's identifier. Because the old version of the model and the new version have different identifiers, does this mean that every API call must be changed to use the new model? Is there any way around changing the API call?
First of all, indeed the only way to update an AutoML Translate model is to create a new one, base it on the old one, and train it with the new examples. This is a deliberate safety measure so you do not lose the old model in the process. Note that although on paper training with more sentences should help the model's accuracy/performance, in practice it might hinder accuracy instead.
Second of all, the API call needs to be changed accordingly. You could, however, code the API call so that it always uses the last model submitted, so it does not need to be edited every time you update the model.
To do so, the first idea that comes to mind is a Cloud Function, triggered once a model is trained/created, that stores the model ID in a GCS bucket from which the code performing the API calls retrieves it.
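As a rough sketch of that lookup (in Swift, with hypothetical names throughout): the Cloud Function is assumed to write the latest model ID into a publicly readable GCS object - my-models-bucket/current-model-id.txt here - and PROJECT, the language codes, and the access token are placeholders you would supply.

```swift
import Foundation

// Hypothetical pointer object maintained by the Cloud Function.
let modelPointerURL = URL(string:
    "https://storage.googleapis.com/my-models-bucket/current-model-id.txt")!

func translate(_ text: String, accessToken: String) async throws -> Data {
    // 1. Fetch the ID of the most recently trained model, e.g.
    //    "projects/PROJECT/locations/us-central1/models/MODEL_ID".
    let (idData, _) = try await URLSession.shared.data(from: modelPointerURL)
    let modelID = String(decoding: idData, as: UTF8.self)
        .trimmingCharacters(in: .whitespacesAndNewlines)

    // 2. Call the Cloud Translation v3 REST endpoint with that model.
    var request = URLRequest(url: URL(string:
        "https://translation.googleapis.com/v3/projects/PROJECT/locations/us-central1:translateText")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "contents": [text],
        "sourceLanguageCode": "en",
        "targetLanguageCode": "es",
        "model": modelID,
    ])
    let (body, _) = try await URLSession.shared.data(for: request)
    return body  // JSON containing the translations
}
```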
Nevertheless, model performance should be assessed before switching the translation calls from one model to the other, so I do not recommend simply pointing at the newest version without additional checks, unless it is for testing purposes.

Changing hyperledger-composer resource definition

So as a project matures it will almost certainly be necessary to modify attributes of the resource definitions to cope with additional requirements.
Let's use two trivial examples - to add a country code to a client address, or to remove a middle initial and swap in a middle name field instead.
Currently, if the resource definition changes, Composer won't read whatever values are extant in the repository. I didn't exhaustively try all combos, but I have had to reconstitute my blockchain at least twice because of this problem.
Is there a way I overlooked to mark fields either as "new" or "deprecated" to get past this? It will be hard to make the case for moving a system that can't be changed into production.
In the same vein it doesn't seem to like empty or null strings much (at least for participant attributes). Having an "optional" override somewhere would save a lot of extra bounds checking in my application. Is there one of those I missed too?
So you can use the APIs or REST to expose the legacy data? You may be referring to Playground above (it's not really a tool for looking at production data; it's for model prototyping/sandbox/testing type stuff).
On the optional question - you can just add that the field is optional in the model; example here -> https://github.com/hyperledger/composer-sample-networks/blob/master/packages/pii-network/models/pii.cto#L20

Achieve Multi-tenancy with GATE

I am using GATE in one of my applications and I have a few queries related to multi-tenancy. My requirements are as given below.
I have a set of keywords specific to each user and, depending on which user is signed in, I need to initialise the gazetteer with the applicable set of keywords.
At a given time there could be multiple users logged into my application, and I want to make sure that the multi-tenancy approach will not be inefficient.
I don't want to store the keywords for each user in the .lst file(s), but rather store them in a DB (Mongo) and inject them only at runtime.
I searched the web for a few samples, and though I found some thoughts on working with a Processing Resource, I have no idea how the performance will be affected.
Your help is much appreciated.
Thanks in advance,
Sajith
That's an interesting use-case for a GATE gazetteer.
One thing I believe you should definitely do is add the user ID as a feature when you're creating the document. This way you'll be able to make your MongoDB query in a processing resource later on.
When you're processing the document, you have several options:
Create a custom PR which calls MongoDB and replicates the DefaultGazetteer code but with an overridden init method (or inherits from it or wraps it; I haven't looked in much detail into whether that's possible). Instead of the default init method, you would provide your list of keywords, then set the needed fields and call execute().
If you don't have too many keywords, create a custom PR (or Groovy scripting PR) which calls MongoDB and does a simple regex search like the one in this thread (they also suggest the stringsearch library in the comments). Then just use the start and end indices to create Lookup annotations on your own - see the sketch after this list.
You said you don't want .lst files, but still: several million words can be handled by both the default and the Hash gazetteer. You should be careful, though, as GATE documents can be very memory-intensive when they carry too many annotations - in your case, Lookups for all user keywords.
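Since the keyword matching itself is framework-agnostic, here is a small sketch of the regex option mentioned above (in Swift for consistency with the other examples here; a Groovy PR would follow the same shape). It only computes start/end offsets - turning them into Lookup annotations, and fetching the keywords from MongoDB, is left to your custom PR.

```swift
import Foundation

// Find each user keyword in the document text and collect the character
// offsets that a PR would later turn into Lookup annotations.
func keywordOffsets(in text: String,
                    keywords: [String]) -> [(keyword: String, range: NSRange)] {
    var hits: [(String, NSRange)] = []
    let whole = NSRange(text.startIndex..., in: text)
    for keyword in keywords {
        // Escape the keyword and match it on word boundaries.
        let pattern = "\\b\(NSRegularExpression.escapedPattern(for: keyword))\\b"
        guard let regex = try? NSRegularExpression(pattern: pattern,
                                                   options: [.caseInsensitive])
        else { continue }
        regex.enumerateMatches(in: text, range: whole) { match, _, _ in
            if let match = match {
                hits.append((keyword, match.range))
            }
        }
    }
    return hits
}
```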
Hope this helps.

Documenting Core Data entity attributes with User Info entries

We're looking for a way to document Core Data entities. So far the only real options I've come up with are:
Document externally using UML or some other standard
Create NSManagedObject subclasses for every entity and use code comments
Use the User Info dictionary to create a key value pair that holds a string comment
Option 1 feels like too much extra work and something that will almost certainly be out of date 99% of the time.
Option 2 feels natural and more correct than option 1. The biggest con here is that those comments could potentially be lost if this model class is regenerated using Xcode.
Option 3 feels a little less correct than option 2, but has the added advantage of opening up automation possibilities for metadata extraction. For instance, in one of our apps we need to keep a close eye on what we're storing locally on the device as well as syncing to iCloud. Using the user info dictionary, it's pretty easy to automate the creation of some form of artefact which can be checked both internally and externally (by the client) for compliance.
So my question is whether it would be inappropriate to use the user info dictionary for this purpose? And are there any other options I'm missing?
Option 2 is what I use every time. In your Core Data model (something.xcdatamodeld or something.xcdatamodel), you can tie your entity to whatever class you want and then put the comments in that class. It helps if you keep your entity name the same as your class name, to make it obvious what you've done.
Additionally, this gives you the ability to add automation, by creating custom getters and setters (accessor methods) and a custom description method.
I use option 2 with categories. I let Xcode generate the NSManagedObject subclasses and put a category on each of these subclasses. With the categories I don't lose the changes I make in them: I can document, write custom getters and setters, and still use the generated subclasses.
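In Swift terms, the same split looks like the sketch below, where Customer and its attributes are hypothetical: codegen owns the subclass, while the extension (the Swift analogue of a category) carries the documentation and custom accessors, so regeneration never wipes them out.

```swift
import CoreData

// Stand-in for the file Xcode would generate from the model;
// in a real project you leave this file untouched.
class Customer: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var taxCode: String?
}

// Documentation and customizations live here, safe from codegen.
extension Customer {
    // Attribute notes:
    // - name:    display name shown in the UI; may be nil for imported rows.
    // - taxCode: stored on-device only, never synced to iCloud (compliance).

    /// Custom accessor whose documentation survives regeneration.
    var displayName: String {
        name ?? "Unknown customer"
    }
}
```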
If we are speaking only about documenting your classes (i.e. writing more or less large amounts of text intended to be read by humans), I'd use option 2.
If you are concerned about Xcode overwriting your classes in option 2, you may consider creating two classes for each entity: one which is generated by Xcode and can always be replaced (you generally do not touch this file), and another which inherits from the generated one and holds all your customizations and comments.
This two-class approach is the one proposed by mogenerator.
If, however, you need to store with the entities some metadata which will be processed programmatically, the userInfo dictionary is perfectly suitable for that.
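For example, a compliance artefact like the one described in the question can be generated straight from the compiled model. In this sketch the "comment" and "syncsToiCloud" userInfo keys are conventions you would define yourself, not Core Data built-ins.

```swift
import CoreData

// Walk every entity and attribute in the model and print the metadata
// stored under our own userInfo keys.
func dumpModelMetadata(_ model: NSManagedObjectModel) {
    for entity in model.entities {
        let comment = (entity.userInfo?["comment"] as? String) ?? "-"
        print("Entity \(entity.name ?? "?"): \(comment)")
        for (name, attribute) in entity.attributesByName {
            let syncs = (attribute.userInfo?["syncsToiCloud"] as? String) ?? "unspecified"
            print("  \(name): iCloud sync = \(syncs)")
        }
    }
}
```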

How to store/serve user-customizable HTML templates?

I want to offer a tumblr-like functionality where you can select a template, and optionally customize the HTML of your template in the browser and save it.
Current stack: Mongo, Sinatra (for REST API) for prototype. Will likely be moving to a compiled, statically-typed language later.
Wondering how best to accomplish this. Options I've considered:
Store the HTML in Mongo and duplicate it for all user accounts, so the HTML for the template you choose gets written into your account. Obvious cons: space inefficiency, and the need to update every user still on the stock template whenever it changes (once you customize it, it becomes your own and I won't ever touch it).
Store the templates in a templates collection, and put custom templates either into this same collection or into the user collection with the owner of the template. The user references a template id. This is quite clearly better than 1, I believe, especially because I won't need to pull the template every time the user object is pulled.
Some third party library? Open to suggestions here.
File system.
I will need to package up these templates (insert js and stuff the user shouldn't be exposed to) and then serve them. Any advice on how best to approach this is greatly appreciated.
Your approach will depend on how often you foresee people customizing the template versus just going with a standard. How about a hybrid approach?
That is, have a field in the user document, created lazily (on first use), that stores either the custom template or maybe a diff from one of the standards (not sure what level of customization you are planning to allow).
Then you can have the template field you describe in 2 above, with a "special" setting for custom templates. While you still have the concern about pulling a template each time, you do have the advantage of knowing that these are some of your more dedicated users - saving a trip to the DB might be advantageous, or you might not care.
If you don't care about 2 trips to the DB for every user, then you take approach 2, add the custom templates to the templates collection and simply reference the new ID for each user that customizes.
It's a balancing act: is the extra data overhead of pulling the template each time worth saving a round trip to the DB, or do you want efficiency in the data you fetch each time at the cost of multiple queries? Only you can answer that, based on how you design your app and how people use it.
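To make the hybrid concrete, here is a sketch of the two document shapes in a statically-typed language (Swift, since the question mentions moving to one later); all field names are illustrative rather than a prescribed schema.

```swift
import Foundation

struct Template: Codable {
    let id: String      // _id in the templates collection
    let name: String
    let html: String    // stock markup, shared by every user on this template
}

struct User: Codable {
    let id: String
    let templateId: String      // references a stock Template by _id

    // Created lazily, only for users who actually customize: either the
    // full custom HTML or a diff against the stock template.
    let customTemplate: String?

    // The custom copy wins; otherwise the caller resolves templateId in
    // the templates collection (the second trip to the DB discussed above).
    func resolvedHTML(lookup: (String) -> Template?) -> String? {
        customTemplate ?? lookup(templateId)?.html
    }
}
```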
For the linked approach you might want to take a look at Database References and Schema Design in the MongoDB docs.
