Lotus Domino: Migrating code changes to production, in clustered environment - cluster-computing

We have a clustered environment for our Domino server in production, and I want to migrate code changes from staging to production. I have not changed the signature of any existing function in the script library, but I have added a new function to it, which is called by a specific agent. Everything works well in staging. Now I want to transfer these changes to the production cluster (which consists of two servers).
If I copy and paste the new function (in the script library) and the changed agent that calls it to one of the production servers, will these code changes automatically be replicated to the other server? In other words, what's the best way to migrate these changes?
Thanks in advance.

Data and design elements are replicated immediately between clustered servers. So, if you change an agent or script library on the first server, the second server gets the changes only seconds later.
Sometimes you get the error message "Error loading USE or USELSX module" after changing a script library. The error occurs when you call an agent or open a form that uses the script library. In this case, you have to recompile the agent or form so that the design elements work properly with the new internal structure of the script library.
This error probably won't appear in your case, since your changes work well in the development environment. You should still test all parts of your application that use the changed script library, though, to make sure everything works fine.

If you really want to make it seamless:
1) make your staging database a master template, and
2) make your production database inherit the design from that master template.
Then, on one of your production databases, choose Application > Refresh Design; it will ask which server to refresh the design from. Make this your staging server.
If you don't do this, it's particularly important to recompile all LotusScript; otherwise, you may end up with "Type mismatch on external name: ". If you do this on your staging server, both the uncompiled and the compiled LotusScript design documents will be part of the design refresh, which makes things a lot easier.
Note that all clients must completely close and reopen the database to pick up any code changes. (This means the database tab itself, as well as any documents open from that database.)
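If you go the template route, you can also push the refresh from the server console instead of the client. A rough sketch, assuming the design task is available and using a placeholder database path (check the exact flags against your Domino version):

```shell
# Domino server console commands (not an OS shell).
# Refresh the design of one database from its master template:
load design -f apps/myapp.nsf

# Or refresh every database on the server that inherits from a template:
load design
```

Either way, the refreshed design then replicates to the other cluster member on its own.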

Related

Laravel Restore a Backup

I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and settled on https://github.com/spatie/laravel-backup.
However, if the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it really just comes down to a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this Cross-Region Replication
A typical production environment automatically runs backups of the most important things your deployment needs in order to recover from a failure. Those parts are commonly your database, your storage folder, and your configuration files.
When you deploy a Laravel application, there aren't many things that are "worth" backing up; you can either mirror the entire disk somewhere, or schedule a backup script that runs every N hours and backs up the things that matter most to your application.
Personally, I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
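For the restore itself, there's no one-command path: you fetch the newest archive from S3, unpack it, and feed the dump to MySQL. A sketch, assuming spatie/laravel-backup's default zip layout (SQL dumps under `db-dumps/` inside the archive), configured AWS CLI credentials, and placeholder bucket and database names:

```shell
# Find and download the most recent backup archive:
LATEST=$(aws s3 ls s3://my-backup-bucket/laravel-backup/ \
          | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://my-backup-bucket/laravel-backup/$LATEST" /tmp/backup.zip
unzip -o /tmp/backup.zip -d /tmp/backup

# Import the SQL dump into the freshly created database:
mysql -u forge -p my_database < /tmp/backup/db-dumps/mysql-my_database.sql
```

After the import, restore the `storage` directory and `.env` from the same archive and the app should come back up.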
Update
Take a look at the link below:
User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring
Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot, as shown in the example.
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need a lot of safety checks before even attempting something like that. In short, I believe manually issuing the restore request is the most solid solution.
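If you're on RDS, the AWS CLI equivalent of that API call looks roughly like this (instance and snapshot identifiers are placeholders):

```shell
# List the snapshots for the instance; the last entry is the most recent:
aws rds describe-db-snapshots \
    --db-instance-identifier myapp-db \
    --query 'DBSnapshots[-1].DBSnapshotIdentifier'

# Restore that snapshot into a NEW instance (RDS never restores in place):
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier myapp-db-restored \
    --db-snapshot-identifier rds:myapp-db-2016-01-01-00-00
```

You would then point the application's database connection at the restored instance's endpoint.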

Development versus Production in Parse.com

I want to understand how people are handing an update to a production app on the Parse.com platform. Here is the scenario that I am not sure about.
Create an app called myApp_DEV. The app contains a database as well as associated cloud code.
Once testing is complete and it is ready for go-live, I will clone this app into myApp_PRD (the production version). Cloning it copies the whole database as well as the cloud code.
So far so good.
Now, 3 months down the line, I have added some functionality, which includes some new cloud code functions as well as some new columns in the database tables.
How do I update myApp_PRD with this new database structure? If I try to clone it from my DEV app, it tells me the app already exists.
If I clone a new app (say myApp_PRD2) from DEV, then all the data will be lost, since the customer is already live.
Any ideas on how to handle this scenario?
Cloud code supports deploying to production and development environments.
You'll first need to link your production app to your existing cloud code. This can be done from the command line:
parse add production
When you're ready to release, it's a simple matter of:
parse deploy production
See the Parse Documentation for all the details.
As for the schema changes, I guess we just have to add all the new columns manually.
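One small shortcut for the columns: Parse creates a column implicitly the first time an object is saved with the new field, so you can seed the production schema through the REST API. A sketch with placeholder class, field, and key names:

```shell
# Saving an object with a field the class has never seen adds that
# column to the production schema automatically.
curl -X POST \
  -H "X-Parse-Application-Id: ${PROD_APP_ID}" \
  -H "X-Parse-REST-API-Key: ${PROD_REST_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"newColumn": "initial value"}' \
  https://api.parse.com/1/classes/MyClass
```

You can delete the seed object afterwards; the column definition remains.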

Will a DML process get affected if the application is upgraded or replaced at the same time?

Situation
Oracle APEX (version not specified)
Single Application
Administration Issue: Deployment of New App version.
Detail
The latest version is on Server1
End Users are actively working on an older version hosted on Server2.
How do I import the changes made on Server1 without impacting users who may still be working on Server2?
Some Basics on Deploying APEX App Upgrades
It's always good etiquette to warn users that an upgrade will be in progress. Give a few days' advance notice and a window of time you will need to accomplish the task. In this case, as I will explain, you can install your new upgrade and switch over to the new version quickly.
Use an Application Alias
Use an application ALIAS to identify your application, to get away from the arbitrary, sequence-controlled ID.
This is where to identify an APP ALIAS
In this example, either the alias or the ID can be used. I recommend publishing the ALIAS to the users and to the support staff who make the little shortcut icons on everyone's desktop:
http://sakura.apex-server-01.guru:8080/apex/f?p=ALIAS
Where "ALIAS" is whatever you've assigned to the app (such as 'F_40788'). Aliases must be unique across an entire INSTANCE, or you can set up some clever redirects using Oracle's RESTful Web Service builder.
How to Switch Your Live Application to Maintenance Mode
The best way to avoid any unwanted DML or user activity from end users is to lock the front-end application right before you switch over to the new version.
This will prevent anything from changing the state of the data during the upgrade. To answer the question: if a DML operation (insert, update, delete) is in flight when the app is overwritten, either the transaction fails and rolls back because it never reached the COMMIT step, or worse. You're better off just locking up for a few minutes.
How to Set an Application to Maintenance Mode
Rename your current version to the permanent ALIAS and archive the one it replaced. It's better not to overwrite or immediately delete your older versions.
Multiple Versions Co-existing in the same Workspace:
It is equally useful to check in the exported application definition scripts, as they are encoded as UTF-8 plain-text SQL. The benefit is that source-code diffs can identify the differences between versions.
As long as their access is restricted and their alias is changed to an unlisted value, the older versions serve as a good fallback for any unanticipated issues with the new, current release.
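One way to produce those plain-text export scripts from the command line is SQLcl's `apex` command. A sketch, assuming SQLcl is installed and using the placeholder app ID from the alias example above:

```shell
# Export application 40788 as a UTF-8 SQL script (creates f40788.sql),
# suitable for checking into source control and diffing between releases.
sql workspace_user/password@apexdb <<'EOF'
apex export -applicationid 40788
exit
EOF
```

Checking the resulting f*.sql files in per release gives you the diffable history described above.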

Heroku: Can I commit remotely

We have a CMS on Heroku; some files were generated by the CMS. How can I pull those changes down? Can I commit the changes remotely and pull them down? Is there an FTP option of some kind?
See: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
It's not designed for persistent file generation and usage.
In practice, it works like this: you put some code into a repository. That code is dynamically pulled onto temporary Amazon EC2 instances and executed. The code can be moved from virtual machine to virtual machine, node to node, without disruption, across data centers. There is no real "place" to get the products of your code from the environment, because anything generated by the checked-out code can (and will) be destroyed as your deployment skips around between the temporary machines.
That being said, there are some workarounds:
If your app includes something like a file browser within your deployed code, you can grab the (entirely temporary) files using that file browser and commit them back to your persistent code trunk.
Another option is using something like S3 for your persistent storage, with your application reading from, and writing to, a data storage service, knowing that while heroku will just re-write and destroy your local data on a frequent basis, the external service will maintain the files.
Similarly, you can change your application to use heroku's postgres for persistent data storage, or use Amazon's RDS, (etc.).
Alternately, you can edit your application in such a way as to ensure that any files generated by it will be regenerated every time the code is refreshed, redeployed, and moved around.
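If you take the S3 route, the credentials belong in config vars rather than on the ephemeral filesystem. A sketch with placeholder key, bucket, and app names:

```shell
# Store the S3 credentials as Heroku config vars; the app reads them
# from the environment instead of from a file on the dyno.
heroku config:set \
  AWS_ACCESS_KEY_ID=AKIA... \
  AWS_SECRET_ACCESS_KEY=... \
  S3_BUCKET=my-app-uploads \
  --app my-app
```

The CMS then writes generated files to the bucket, and they survive every dyno restart and deploy.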

VS2013 Web Deploy Replace from Server error

I have deployed web applications using Web Deploy onto IIS 7.5 without issues; the preview works, and it updates only the necessary files when publishing again. We have designers who like to change the CSS files over FTP, and I thought the "Replace fileName from server" command in Visual Studio would be great for pulling their changes into TFS.
Every time I run it, it comes up with the error:
The synchronization is being stopped because the maximum number of sync passes '5' has been exceeded even though all the changes could not be applied. This could occur if there are external changes being made to the destination.
If anyone could shine some light on the error or some documentation regarding this feature, that would be great.
Web Deploy does at least 2 passes to do a remote sync (when either the client or the server is remote, which is the case for you too). At the end of these passes, Web Deploy does a metadata check to see whether all the files are in sync.
If by then other changes have happened (such as someone else starting a Web Deploy sync to the same destination, or a few files being edited via the web, FTP, or any other means), then Web Deploy will attempt a 3rd pass to bring them in sync with the source. If changes keep happening, the passes keep happening.
But since we don't want to sync the content forever, we placed a maximum retry limit of 5. You can actually override it with something higher, but that's not recommended.
Update:
You can set this in two ways:
Pass the flag -retryAttempts=7 (or any number) to msdeploy on the command line.
Set RetryAttemptsForDeployment in your VS targets, or use it as an MSBuild property. It's described here.
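Concretely, the two options might look like this (site, server, and project names are placeholders):

```shell
# Option 1: override on the msdeploy command line
msdeploy.exe -verb:sync -source:contentPath="C:\inetpub\mysite" -dest:contentPath="Default Web Site",computerName=webserver01 -retryAttempts=7

# Option 2: as an MSBuild property when publishing from Visual Studio or CI
msbuild MySite.csproj /p:DeployOnBuild=true /p:PublishProfile=Production /p:RetryAttemptsForDeployment=7
```

Raising the limit only papers over the race, though; the real fix is to stop the concurrent FTP edits during the sync window.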
