Lifecycle of Alloy Migrations - appcelerator

I am unable to understand Alloy migrations. Specifically:
1) When is a migration run? On app upgrade or on every app launch?
2) When is migration.down() executed? I would assume Alloy executes every up() chronologically, from whatever version the installed app is at, to bring the schema up to the current version. What is the role of down()?

Every time the app opens anew (so not on resume), Alloy checks for migrations that haven't been run yet (these are tracked in a SQLite table).
migration.down() is run when the user somehow downgrades to an older version. This can't happen with App Store deployments, but it can happen during tests and ad hoc/enterprise deployments.
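To make the up()/down() pairing concrete, here is a minimal sketch of a migration file, assuming a hypothetical book model (the timestamp prefix and the columns are made up; Alloy ties migrations to models through the file name):

// app/migrations/20160101000000_book.js (hypothetical model and timestamp)
migration.up = function (migrator) {
    // Applied once, the first time a fresh launch finds this migration pending
    migrator.createTable({
        columns: {
            title: 'TEXT',
            author: 'TEXT'
        }
    });
};

migration.down = function (migrator) {
    // Reverses up() so the schema can be rolled back to an older app version
    migrator.dropTable();
};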

Related

How can I upgrade Grails 2.2.4 to the latest version?

I'm working with a Grails 2.2.4 application and I need a procedure to upgrade it to the latest version (I hope that's possible). As a first step I thought of following the instructions on the official site, but those only take me to version 3.
I'd like to know if anyone has already done this or has experience with it: how long it takes, the process, and the main problems.
Many thanks in advance.
I think you need to follow both sets of upgrade instructions, the one for 3.x and the one for 4.x.
Start with the 3.x changes and then move on to the 4.x changes.
Another approach, which I think may be better, is to start an empty 4.x application and then move your code across. Also check first that all the plugins you are using have a 3+ version.
The effort required to upgrade can vary massively depending on multiple factors, including the size of the project, the quality of the original code, whether plugins were used (and, if so, whether they have been updated or the functionality will need replacing), and whether deprecated taglibs such as remoteFunction were used.
There is not a great deal of difference between 3.x and 4.x, so it makes sense to upgrade straight to 4.x.
Tackle it in stages from the basis of a new project, attempting to rebuild the project between stages.
Reestablish configuration. You don't have to use application.yml (the default in 4.x); you can create an application.groovy with the same parameters as your old project (see the sketch after this list).
Move over domain objects but use a new database URL, and compare the schemas between the old and new databases to ensure they match (unless you don't rely on GORM to recreate/update the schema).
Move over any other source and command objects, ensuring the project still builds. You may need to modify the build configuration (build.gradle in 4.x) at this stage to bring in dependencies and plugins.
Move over services, ensure everything compiles, and make sure transactions are behaving as intended.
Move over controllers, ensuring any tests run successfully.
Move over the views.
Hopefully, if the project is still building at this stage, you can run it!
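As a hedged illustration of the configuration step (the keys and values below are placeholders, not taken from the question), a grails-app/conf/application.groovy carries over settings in Groovy syntax rather than YAML:

// grails-app/conf/application.groovy (hypothetical values)
grails.serverURL = 'http://localhost:8080'

// Same structure the default application.yml expresses in YAML
dataSource {
    pooled = true
    driverClassName = 'org.h2.Driver'
    username = 'sa'
    password = ''
}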

Automatic version management using agvtool

Xcode 11 has changed the way that versions are handled.
So far I have had two targets, Dev & Prod, each with separate versions. The Prod version would be entirely manual; Dev would be automated:
During a build, a script would run, which would then fetch the git tags. One tag would contain information about the latest Dev version; if it was newer, the script would update the version inside Info.plist just for the Dev target.
When Dev was deployed using a script (create an ipa, resign for in-house distribution, upload), the build version would then be increased, and the remote tag containing the version information would be updated.
In this way everyone's dev version would get automatically synchronized, and managing multiple dev builds would be easy. Prod would be updated relatively infrequently, so it could be managed manually. (A sketch of this build-phase script follows below.)
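For concreteness, here is a hedged reconstruction of the kind of Run Script phase described above; the tag naming scheme is hypothetical, while $SRCROOT and $INFOPLIST_FILE are standard Xcode build settings:

# Hypothetical Run Script phase, Dev target only
git fetch --tags --quiet
# Assume tags like "dev-build-42" track the latest Dev build number
LATEST=$(git tag --list 'dev-build-*' | sed 's/dev-build-//' | sort -n | tail -n 1)
PLIST="$SRCROOT/$INFOPLIST_FILE"
CURRENT=$(/usr/libexec/PlistBuddy -c "Print :CFBundleVersion" "$PLIST")
if [ "$LATEST" -gt "$CURRENT" ]; then
    /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $LATEST" "$PLIST"
fi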
However, in Xcode 11, whenever the version (or build) is updated in the General tab, the entries in Info.plist are replaced with $(CURRENT_PROJECT_VERSION) and $(MARKETING_VERSION), and the Current Project Version and Marketing Version values in the Build Settings tab are used instead.
So far I have used PlistBuddy to read and update the versions inside Info.plist, but from what I understand I would now have to use agvtool (see the command sketch after this list). However, there are two issues with it:
If it's run as a Run Script phase, it causes the build process to cancel.
It is unable to handle separate versions for two targets (so I cannot automatically manage Dev while leaving Prod alone).
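For reference, these are the commands in question; the tools and flags are real, but the version numbers are placeholders:

# PlistBuddy reads/writes Info.plist directly (the pre-Xcode 11 approach)
/usr/libexec/PlistBuddy -c "Print :CFBundleVersion" Info.plist
/usr/libexec/PlistBuddy -c "Set :CFBundleVersion 42" Info.plist

# agvtool instead writes CURRENT_PROJECT_VERSION / MARKETING_VERSION,
# and does so project-wide rather than per target
agvtool new-version -all 42
agvtool new-marketing-version 1.2.3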
I know that theoretically I can still use Info.plist for versioning, but the moment someone changes the version manually in the General tab, the whole thing will get messed up (from experience, I know that this will happen).
I have two questions:
Is my understanding of the process correct?
Can I still have a version-management system using agvtool similar to what I had before?

How do I upgrade an Android native app to a new version based on NativeScript while keeping data from the old app?

I want to update an existing Android app (not made with NativeScript) with a new one (made with NativeScript).
The old app stored some user data in a SQLite database. I want this to survive the upgrade.
Now, I have the same app-id in the new app as in the old, so that part is in place. To test whether the database survives, I started the Android emulator with the old app, created a few records, then deployed the NativeScript version using
tns run android --bundle --device=1
and this correctly replaces the old app with the new code, but at the same time it seems to wipe the database, which is otherwise correctly stored in /data/data/app-id/databases.
Is this due to the tns deployment for debugging possibly wiping the system first, or something else?
How do you guys test this?
Edit: apparently the uninstall-on-each-deploy behavior, rather than upgrading in place, is a known thing, tracked in their GitHub as issue #3382.
tns run android --bundle produces a development version of the APK, which will not match the signature of your production APK built with native Android.
If you use the same signing certificate you used for the production native app when running/building your {N} version of the APK, the data will survive the upgrade by default.
So your command may look like
tns [build|run] android --bundle --release --keyStorePath /path/to/keystore --keyStorePassword keystore-password --keyStoreAlias keystore-alias --keyStoreAliasPassword keystore-alias-password
Read more in the docs.
Edit: the CLI seems to have a known issue with tns run: instead of upgrading the APK in place, it deletes the old version and installs the new one. So it should not be a problem when you publish an APK built with tns build. Credits to #DimitarTachev.
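Separately, one hedged way to check whether the records actually survived, assuming a debuggable build and a hypothetical package id, is to list the app's database directory over adb:

adb shell run-as com.example.app ls /data/data/com.example.app/databases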

How to invoke pre/post deployment scripts during Heroku pipeline promotions

I have a Rails app that runs database migrations in a rake task after tests have succeeded and immediately before deploying code to Heroku.
I am using CodeShip to run the tests, run the migrations, and then finally deploy to Heroku.
However, I am running into a problem with Heroku's new Pipelines feature.
Upon promoting a version of my app from one environment to another, only the application slug is copied over to the new environment. No branches are merged or updated in git, and no CodeShip builds trigger.
Even the Heroku build history shows only a promotion entry with no build log associated, which makes sense since it is just copying the slug over, not building a new slug.
So my problem is that when I promote my app to a new environment, I am not able to find any way of hooking a custom script into that event to perform database migrations.
Main Question
Is there support for this that I am just having trouble finding? If not, is a feature in the works that would support this?
Feature Suggestion
Ideally I would like the promotion feature to work by merging the underlying git branches, that way codeship could still kick off, run all tests and migrations again in the new environment, and then finally trigger the build in the next environment. This would require each environment in the pipeline to be tied to a specific branch, instead of just promoting by commit hash, but I don't think that would be problematic.
Essentially I would like the promote button to just do what we developers would often do when manually promoting a version of our app, merge to a git branch associated with that environment and let our CI server's git hooks kick it off from there.
A post-promotion script can be defined in the Procfile with the release process type; Heroku runs the release command whenever a new release is created, which includes pipeline promotions.
release: npm run migrate
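Since the question concerns a Rails app, the equivalent entry would presumably invoke the migration rake task instead:

release: rake db:migrate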

composer and satis tags for testing and prod

We're using composer, satis and SVN to manage our in-house PHP libraries.
We commit changes to SVN trunk during development, then tag versions (following semantic versioning) when they're ready for testing.
Once a library version is tagged, we can use composer as part of our deployment to the testing environment. Following successful testing, we'd then deploy that same version to production.
The issue here is that once we've tagged a version for testing, we have to be very careful, as the newly tagged version will be picked up by composer when preparing the next prod release.
What I'm imagining is that we'd tag a version as a beta or RC (e.g. v1.1RC1) and somehow configure our deployment process such that it will refuse to deploy an RC or beta to production. If a version is tested successfully, we'd re-tag it as a released version (v1.1RC1 -> v1.1) and release that.
Can this be achieved?
From what you are saying, I understand that you are actually afraid of tagging a new version of a library because that code could actually be used and break another application, right?
One approach would be to do good testing. I don't see why it should be a problem to tag a version of a library: if the tests are all green, there should be no reason not to tag it. This would work even if the tests are basically only "let's see if it works, manually".
Now the second step is to integrate that new version into the application: run composer update and see if the application is still running, i.e. start all the tests and wait for green.
I guess it might be a good idea to have a separate area where you check out the application, intentionally run composer update to fetch all the newest libraries, run all the tests, and report that a) there are updates and b) they work. A developer should then confirm the update, i.e. do it again manually and commit the resulting composer.lock file, or grab the resulting lock file from that update test.
I don't think there is a benefit in using non-production release versions. You have to deal with the next version anyway, and constantly toggling the minimum-stability setting or adding @RC or @beta flags to the library's version requirements doesn't really help.
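For illustration, this is what those per-package stability flags look like in a composer.json (the package name is hypothetical); with minimum-stability at its default of stable, an RC tag such as v1.1RC1 is ignored unless explicitly flagged:

{
    "minimum-stability": "stable",
    "require": {
        "acme/in-house-lib": "1.1.*@RC"
    }
}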
