Backup/restore a single environment in Octopus Deploy - octopus-deploy

We've been experimenting with Octopus Deploy on a development PC and now want to transfer the environment we've created onto our main Octopus Deploy server (which is used by other teams and already has a few environments set up on it).
So we would like to backup/restore this one environment. However, it looks like Octopus only allows you to backup/restore the entire database.
Is it possible to move a single environment from one Octopus server to another using backup/restore or another means?

What worked for me was simply doing the following in order:
Shut down the Octopus service so that no transactions are in flight.
Copy the Raven database (usually stored in Program Files\Data) to your new server.
Install the new Octopus server and, during setup, specify the location of the data you copied in the second step on the Storage tab.
The Octopus developer, Paul, mentions that the great thing about RavenDB is the installation: it requires no running services like SQL Server, it's just a copy-paste of the data itself, which makes it great for installation and portability.
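For what it's worth, a rough sketch of those steps on the old machine, in Python (the service name and paths are assumptions; adjust them to your installation):

    import shutil
    import subprocess

    # Assumed Windows service name and data locations - verify against your install.
    SERVICE_NAME = "OctopusDeploy"
    SOURCE_DATA = r"C:\Program Files\Octopus Deploy\Data"
    DEST_DATA = r"\\new-server\octopus-migration\Data"

    # Stop the Octopus service so no transactions are in flight while copying.
    subprocess.run(["sc", "stop", SERVICE_NAME], check=True)

    # Copy the RavenDB data directory across to the new server (here via a UNC share).
    shutil.copytree(SOURCE_DATA, DEST_DATA)

    print("Data copied; point the new server's Storage tab at", DEST_DATA)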

There's currently no way to backup/restore just part of the database - you'd need to restore a full backup, and then delete the information you don't need.
Octopus 2.0 (which is now a public beta) has a comprehensive REST API, so it would be possible to use that API to fetch a subset of information and import it into your new Octopus server.
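For example, a minimal sketch against the 2.0 REST API that copies a single environment definition between servers (the server URLs, API keys, and environment name are placeholders; machines registered to the environment would need to be re-created the same way via /api/machines):

    import requests

    # Placeholder URLs and API keys - substitute your own servers and credentials.
    OLD_SERVER = "http://dev-octopus/api"
    NEW_SERVER = "http://main-octopus/api"
    OLD_AUTH = {"X-Octopus-ApiKey": "API-OLDSERVERKEY"}
    NEW_AUTH = {"X-Octopus-ApiKey": "API-NEWSERVERKEY"}

    # Fetch every environment from the development server.
    environments = requests.get(f"{OLD_SERVER}/environments/all", headers=OLD_AUTH).json()

    # Re-create only the one we care about on the main server.
    for env in environments:
        if env["Name"] == "Experimental":  # placeholder environment name
            payload = {"Name": env["Name"], "Description": env.get("Description", "")}
            r = requests.post(f"{NEW_SERVER}/environments", json=payload, headers=NEW_AUTH)
            r.raise_for_status()
            print("Created environment", r.json()["Id"])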

Related

Why do some Jelastic providers block Export Environment option

According to the Jelastic documentation, it is possible to export the environment configuration and download it so it can be restored with another provider.
However, I have tried with 2 Jelastic providers and they have both disabled the option for exporting private data.
So exporting/downloading/uploading/importing an environment is not possible.
i.e. I was expecting a process similar to the cPanel backup/restore tool.
In fact, looking at the deployment process from a different angle lets you drop the model of keeping data and configuration on the platform. Try to think a bit differently and use a CI/CD approach. Jelastic provides a platform where the thing you created lives somewhere you maintain it (a VCS such as Git), and the specific stack it depends on is already pre-configured as a layer that your code can be installed (copied) onto. You don't need to keep the data somewhere in the cloud, because you have it locally (that is, in your VCS) and make your changes there. Then just run a 'pull' (manually or automatically) on whichever deployment environment (test, production, staging) you need.
Moreover, you can express any environment type as code and create it before deploying the data.
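As an illustration only (host, path, and branch below are made up), the 'pull' step can be as small as refreshing a checkout on the target environment:

    import subprocess

    # Hypothetical target node and application directory on the environment.
    NODE = "deploy@node12345-myenv.example.com"
    APP_DIR = "/var/www/webroot/ROOT"

    # Pull the latest code from the central repository onto the environment.
    subprocess.run(["ssh", NODE, f"cd {APP_DIR} && git pull origin master"], check=True)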
Please find the articles describing each case:
Deployment Guide
Jelastic Packaging Standard for CI/CD Automation
If you would like to handle database backups, check this article:
Scheduling Database Backups
The additional FTP add-on makes it easier to copy files for each instance:
FTP/FTPS Support in Jelastic

Automatically pull packages on provisioning

We're hosting on EC2. I've read this article here for provisioning tentacles. Is there a script which will then tell that provisioned server to grab the latest packages (from the latest release of the environment it's provisioned for)?
Skip actions are step-related; however, I've just traced the POST request and there's a field SpecificMachineIds - so you CAN deploy to a specific machine.
It feels a bit smelly, but you'd have to get the new Id of the machine from the API, and then use that in your deployment request.
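A rough sketch of that flow (server URL, API key, machine name, and release/environment IDs are placeholders):

    import requests

    OCTOPUS = "http://your-octopus/api"             # placeholder server URL
    HEADERS = {"X-Octopus-ApiKey": "API-XXXXXXXX"}  # placeholder API key

    # Look up the newly provisioned machine by name to get its Id.
    machines = requests.get(f"{OCTOPUS}/machines/all", headers=HEADERS).json()
    machine_id = next(m["Id"] for m in machines if m["Name"] == "new-ec2-tentacle")

    # Create a deployment of an existing release, restricted to that one machine.
    deployment = {
        "ReleaseId": "releases-123",       # latest release for the target environment
        "EnvironmentId": "environments-1",
        "SpecificMachineIds": [machine_id],
    }
    r = requests.post(f"{OCTOPUS}/deployments", json=deployment, headers=HEADERS)
    r.raise_for_status()
    print("Deployment queued:", r.json()["Id"])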
EDIT
A quick Google search for SpecificMachineIds turned up this, which is probably what you need:
Octopus Deploy Support Question

How can I use TeamCity to do Production releases safely?

We currently use TeamCity to build a deployment artifact, then a further TeamCity task takes that artifact and deploys it to our development and testing servers on demand.
We can store the passwords and other secret data in properties files that we can check into source control, as these are all internal servers and the developers have full access to them.
However, for release to Production (and our final test layer) there are secret passwords and configuration that we don't want checked into normal source control, or for developers to be able to discover. So to do 'real' deployments we have to hand the artifact over to another team, and they maintain a properties file with the production values.
What methods exist to store these secrets and allow TeamCity to run a deploy without ever leaking the secrets out?
(Note: I am one of the devs, and it is not a trust issue... I don't want the ability to find out prod passwords, so I can never accidentally learn them and do some horrific damage!)
Probably what you need here is to create a separate project with a narrower scope of permissions (for example, allow only certain people to edit build configurations). In this project, create a build configuration responsible for deployment. In this configuration, you can define a Typed Parameter of type 'password' to store the password for the production environment.
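If you expose that parameter to the build as an env.-prefixed parameter (the name PROD_DB_PASSWORD below is made up), TeamCity injects it as an environment variable and masks its value in the build log, so the deployment script can read it without the secret ever touching source control:

    import os

    # The secret is injected by TeamCity at build time; it never lives in the repo.
    prod_password = os.environ["PROD_DB_PASSWORD"]  # hypothetical parameter name

    def deploy(password: str) -> None:
        # ... connect to the production server using the password ...
        print("Deploying with a password of length", len(password))

    deploy(prod_password)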
Another option is to use the Deployer Plugin, especially its ability to deploy over SSH with private key authentication.
If you are OK with using a third-party solution, consider something like CloudMunch, which can help you perform release management functions with these secure parameters collected at deploy time and encrypted post-deployment.
Disclaimer: I work with CloudMunch
You can do two things.
Use a TeamCity project that deploys artifacts to production only. This will be accessible only to ops members.
TeamCity also supports running agents under different user IDs. You can create a new user ID that has access to the production "secrets" (passwords and configuration). Use this ID to run the targets in the first step.

Teamcity server migration - agent changes

I'm planning to migrate my TeamCity server to a new physical location. The process is pretty straightforward: export the database, install a vanilla TeamCity server, and import the database via maintainDB.sh.
Since I have a large installation, I decided to back up only the server settings, projects and build configurations, and plugins. My thinking was that I could manually move the build logs and artifacts later (I'd rather not try to restore from a 500GB zip file). However, after importing the backup I was unable to see any build agents in the agent pool.
Any ideas? Do you have to install each build agent from scratch just because the server got migrated to a new location? Or do you just have to point the agents to the new server and that's it (and if so, why does the agent pool on the server seem empty)?
Thanks,
If you are changing the server's URL in your migration, which from your question I assume you are, then you will need to edit each build agent's properties.
In ~TeamCity\Install\buildAgent\conf you will have a buildAgent.properties file. You need to modify this file to point to your new TeamCity location via the serverUrl value. Then restart the build agent, and authorize and enable it from your TeamCity interface.
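If you have more than a couple of agents, a small script can rewrite that property in each agent's buildAgent.properties (the paths and new URL below are assumptions):

    from pathlib import Path

    # Assumed agent config locations and new server address - adjust to your setup.
    AGENT_CONF_FILES = [
        Path(r"C:\TeamCity\buildAgent\conf\buildAgent.properties"),
    ]
    NEW_SERVER_URL = "http://new-teamcity-host:8111"

    for conf in AGENT_CONF_FILES:
        lines = conf.read_text().splitlines()
        # Replace only the serverUrl property, leaving every other setting untouched.
        lines = [f"serverUrl={NEW_SERVER_URL}" if line.startswith("serverUrl=") else line
                 for line in lines]
        conf.write_text("\n".join(lines) + "\n")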
There is an extremely brief explanation of this here at the bottom of the "Move TeamCity Installation to a New Machine" section.
And to answer your question as to why the agent pool seems empty - it is because the agents are not yet looking for the server at its new location.

Setting up a collaborative environment for web application development

My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and finally a way to push tagged code out to a staging server. What I'm thinking is svn/Redmine for the code repo; each user has an account on a central development machine to allow for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into svn, and later tagged and pushed out to the staging server. Yeah? Thoughts, comments, or recommendations?
*Also, in a dev environment, what is the best way to handle databases? Is it wise to pull from the production database? And should each developer have his/her own db or work off a master db?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on Stack Overflow and flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this, as https://bitbucket.org/ offers unlimited data and private/public repos where you can keep your codebase. http://github.com is also powerful and de facto the most popular version-control-oriented tool out there, although it comes at a small price.
So your master branches live in version control, and your devs will check out from there and commit to it as well.
Your deployment tools will deploy to your live and staging environments from your master.
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE so everything can be tested there by the merchant.
DEV is nice to have as an exact replica as well, but it can just as well sit on a developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain to sync, so you'd better have a script for it that syncs from LIVE to the other environments and prevents syncing from other environments to LIVE. This limitation also means that all configuration and content should be added on LIVE only and only then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (as we are talking Magento CE; Magento EE has migrations built in).
For deployment I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
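A minimal Fabric sketch of that idea (hosts, database name, and paths are placeholders; it only covers the LIVE-to-STAGE direction described above):

    from fabric import Connection

    LIVE = Connection("deploy@live.example.com")    # placeholder hosts
    STAGE = Connection("deploy@stage.example.com")

    DB = "magento"                # placeholder database name
    APP_DIR = "/var/www/magento"  # placeholder code location

    def reset_stage():
        # 1. Dump the LIVE database and ship the dump over to STAGE.
        LIVE.run(f"mysqldump {DB} | gzip > /tmp/{DB}.sql.gz")
        LIVE.get(f"/tmp/{DB}.sql.gz", f"/tmp/{DB}.sql.gz")
        STAGE.put(f"/tmp/{DB}.sql.gz", f"/tmp/{DB}.sql.gz")

        # 2. Reset the STAGE database from the LIVE dump (never the other way round).
        STAGE.run(f"gunzip -c /tmp/{DB}.sql.gz | mysql {DB}")

        # 3. Refresh the STAGE code from the central repository and clear the cache.
        STAGE.run(f"cd {APP_DIR} && git fetch origin && git reset --hard origin/master")
        STAGE.run(f"cd {APP_DIR} && rm -rf var/cache/*")

    if __name__ == "__main__":
        reset_stage()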
It's also a good idea to target the following everyday tasks:
clients need to reset STAGE for their tests
project managers, developers, or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for that specific test only), as should deleting the test
3rd-party devs might need access to a specific test or dev environment (this is common with Magento, as on average at least 10 external extensions are installed in every Magento store)
