We're hosting on EC2. I've read this article here for provisioning tentacles. Is there a script which will then tell that provisioned server to grab the latest packages (from the latest release of the environment it's provisioned for)?
Skip actions are step-related; however, I've just traced the POST request and there's a field SpecificMachineIds, so you CAN deploy to a specific machine.
It feels a bit smelly, but you'd have to get the new Id of the machine from the API, and then use that in your deployment request.
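To illustrate, here's a rough sketch (Python + requests) of what that could look like against the Octopus REST API. The server URL, API key and IDs are placeholders you'd look up on your own instance, so treat it as an outline rather than a drop-in script:

```python
# Sketch: trigger a deployment to a single machine via the Octopus REST API.
# Server URL, API key and the various IDs below are placeholders -- look the
# real ones up on your own instance (e.g. GET /api/machines/all for machine IDs).
import requests

OCTOPUS = "http://your-octopus-server"          # assumed server URL
HEADERS = {"X-Octopus-ApiKey": "API-XXXXXXXX"}  # your API key

# Find the Id of the freshly provisioned machine by name.
machines = requests.get(f"{OCTOPUS}/api/machines/all", headers=HEADERS).json()
machine_id = next(m["Id"] for m in machines if m["Name"] == "new-ec2-tentacle")

# Create a deployment of an existing release, restricted to that one machine.
payload = {
    "ReleaseId": "releases-123",        # latest release for the project
    "EnvironmentId": "environments-1",  # environment the machine belongs to
    "SpecificMachineIds": [machine_id],
}
resp = requests.post(f"{OCTOPUS}/api/deployments", headers=HEADERS, json=payload)
resp.raise_for_status()
print("Deployment created:", resp.json()["Id"])
```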
EDIT
A quick Google on SpecificMachineIds turned up this, which is probably what you need:
Octopus Deploy Support Question
I am trying to do MSI web deployments with Chef. I have about 400 web servers with the same configuration. We will do the deployment in two slots of 200 servers each.
I will follow the steps below for a new release:
1) Increase the cookbook version.
2) Upload the cookbook to the server.
3) Update the cookbook version constraint in the role and run list.
The cookbook does a lot of steps, like installing 7 MSIs, updating IIS settings, updating the web.config file and adding registry entries. Once deployment is done we need to notify the testing team so that they can start testing. My question is: how can I ensure the deployment completed successfully on all machines? How can I find out if one MSI was not installed on one machine, or one web.config file was not updated properly?
My understanding is that chef-client runs every 30 minutes by default, so I have to wait up to 30 minutes for the deployment to complete. Is there any other way to push (I can't use Push Jobs, since Chef has removed Push Jobs support from Chef High Availability servers), like running chef-client via knife from the workstation?
It would be great if anyone could share their experience using Chef in large-scale Windows deployments.
Thanks in advance.
I personally use Rundeck to trigger on-demand Chef runs.
Based on your description, I would use 2 prod environments, one for each group, so you can bump the cookbook version constraint for each group separately.
For reporting at this scale, consider buying a license to get chef-manage and chef-reporting so you'll have a complete overview; the next option is to use a report handler to report the run status and send a mail if there was an error during the run.
Nothing in here is specific to Windows, so really you are asking how to use Chef in a high-churn environment. I would highly recommend checking out the new Policyfile workflow; we've had a lot of success with it, though it has some sharp limitations. I've got a guide up at https://yolover.poise.io/.

Another solution on the cookbook/data release side is to move a lot of your tunables (e.g. versions of things to deploy) out of the cookbook and into a little web service somewhere, then have your recipe code read from it to get its tuning data (see the sketch below).

As for the push vs. pull question, most people end up with a hybrid. As @Tensibai mentioned, RunDeck is a popular push-based option. Usually you still leave background interval runs on a longer cycle time (maybe 1 or 2 hours) to catch config drift and use the push system for more specific deploy tasks. Beyond RunDeck you can also check out Fabric, Capistrano, MCollective, and SaltStack (you can use its remote execution layer without the CM stuff). Chef also has its own Push Jobs project, but I think I can safely say you should avoid it at this point; it never got enough community momentum to really go anywhere.
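To make the tunables idea concrete, here's a minimal, stdlib-only Python sketch of what such a web service could look like. The port and version values are made up, and the recipe side would simply fetch and parse this JSON during each chef-client run:

```python
# Minimal sketch of a "tunables" web service: it only serves a JSON document
# describing which versions should currently be deployed. The data below is
# made up; in practice you'd load it from a file or small database that the
# release manager updates.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TUNABLES = {
    "webapp_msi_version": "2.4.1",
    "config_bundle": "2017-06-release",
}

class TunablesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(TUNABLES).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Recipes would fetch http://this-host:8080/ during each chef-client run
    # and use the returned values as node attributes / install parameters.
    HTTPServer(("0.0.0.0", 8080), TunablesHandler).serve_forever()
```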
I wasn't able to find solid information on this and I wanted to ask developers who use Parse Dashboard:
What are the pros/cons of Parse Dashboard local installation vs deployment?
I currently run the Parse Dashboard on local installation, but I know that deployment to Heroku is also an option (my app is deployed on Heroku). I wanted to gather some information before deploying/not deploying.
Thank you!
I also have it running locally, and I think for security reasons it's best to do so. If you set up the dashboard on the same server on which Parse is running, then you will have to take security measures to protect access to the dashboard and the config file, which includes your master key and all that. This definitely outweighs the argument against hosting it locally, which in my opinion is only that the dashboard is easier to access.
If you really want to set up a dashboard on a server, at least do it on a separate server.
I've tried taking a look on Google for how this can be done but I thought I'd post a question anyway to see what the best practice is for doing this nowadays.
We are trying to set up a TeamCity build to deploy to a client's environment. Basically we're generating an artifacts zip file, and the plan is to (somehow) deploy this to the client's UAT, Staging and Live servers (which are password protected). When the build is run it executes a NAnt script.
From our network in the office we are able to remote into the UAT box, but we can only get to the Staging and Live servers whilst on the UAT box.
What is the best way of doing this? Are there any useful resources I can look at to help me move forward?
You can try the Deployer plugin developed by the TeamCity team. It offers SMB/FTP/SSH deploy options as well as an SSH Exec option.
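If SSH is actually available on those boxes (the plugin's SSH options assume it is, which may not hold on Windows servers), you could also script the two-hop copy yourself. Here's a rough Python/paramiko sketch where the hostnames, credentials and paths are all placeholders:

```python
# Sketch: copy the artifacts zip to a Staging server that is only reachable
# from the UAT box, by tunnelling through UAT with paramiko. Hostnames,
# usernames and paths are placeholders; this assumes SSH/SFTP is enabled on
# both boxes.
import paramiko

UAT_HOST, STAGING_HOST = "uat.example.com", "staging.internal"

# 1. Connect to the UAT jump box.
jump = paramiko.SSHClient()
jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
jump.connect(UAT_HOST, username="deploy", password="***")

# 2. Open a direct-tcpip channel from UAT to the staging server's SSH port.
channel = jump.get_transport().open_channel(
    "direct-tcpip", (STAGING_HOST, 22), ("127.0.0.1", 0)
)

# 3. Connect to staging through that channel and upload the artifact.
staging = paramiko.SSHClient()
staging.set_missing_host_key_policy(paramiko.AutoAddPolicy())
staging.connect(STAGING_HOST, username="deploy", password="***", sock=channel)

sftp = staging.open_sftp()
sftp.put("artifacts.zip", "/deploy/artifacts.zip")
sftp.close()
staging.close()
jump.close()
```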
We've been experimenting with Octopus Deploy on a development PC and now want to transfer the environment we've created onto our main Octopus Deploy server (which is used by other teams and already has a few environments set up on it).
So we would like to backup/restore this one environment. However, it looks like Octopus only allows you to backup/restore the entire database.
Is it possible to move a single environment from one Octopus server to another using backup/restore or another means?
What worked for me was simply doing the following in order:
Shut down the Octopus service so that no transactions are going through.
Copy the RavenDB database (usually stored in Program Files\Data) to your new server.
Install the new Octopus server and, during the setup, in the Storage tab, specify the location of the data you copied in the second step above.
The Octopus developer, Paul, mentions that the great thing about RavenDB is the installation: it requires no running services like SQL Server. It's just a copy-paste of the data itself, which is great for installation and portability.
There's currently no way to backup/restore just part of the database - you'd need to restore a full backup, and then delete the information you don't need.
Octopus 2.0 (which is now a public beta) has a comprehensive REST API so it would be possible to use that API to fetch a subset of information and import it to your new Octopus server.
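As a rough illustration (not a supported migration path), a script against that API could look something like this. URLs, API keys and the environment name are placeholders, and a real move would also have to cover projects, variables and roles:

```python
# Sketch: copy one environment (and its machines) from an old Octopus 2.x
# server to a new one via the REST API. URLs and API keys are placeholders,
# and a real migration would also need projects, variables, roles, etc.
import requests

OLD = ("http://old-octopus", {"X-Octopus-ApiKey": "API-OLD"})
NEW = ("http://new-octopus", {"X-Octopus-ApiKey": "API-NEW"})

def get_all(base, headers, resource):
    return requests.get(f"{base}/api/{resource}/all", headers=headers).json()

# 1. Find the environment we want to move on the old server (name is a placeholder).
env = next(e for e in get_all(*OLD, "environments") if e["Name"] == "My Environment")

# 2. Re-create it on the new server and remember the new Id.
created = requests.post(
    f"{NEW[0]}/api/environments", headers=NEW[1],
    json={"Name": env["Name"], "Description": env.get("Description", "")},
).json()

# 3. Re-register the machines that belonged to that environment.
for machine in get_all(*OLD, "machines"):
    if env["Id"] in machine.get("EnvironmentIds", []):
        machine["EnvironmentIds"] = [created["Id"]]
        machine.pop("Id", None)
        requests.post(f"{NEW[0]}/api/machines", headers=NEW[1], json=machine)
```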
My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a location to test development code on, and finally a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user has an account on a central development machine to allow for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Thoughts, comments or recommendations?
*Also, in a dev environment what is the best way to handle databases? Is it wise to pull from the production database? And should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on StackOverflow and was flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this, as https://bitbucket.org/ offers unlimited data and private/public repos, so you can set your codebase up there. http://github.com is also powerful and de facto the most popular version-control-oriented tool out there, although it comes at a small price.
So your master branches live in your version control, and your devs will check out from there and commit back to it as well.
Your deployment tools will then deploy to your live and staging environments from your master branch.
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE, so everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well live on a developer's local environment; it's meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain in the ass to sync, so you'd better have a script for it that syncs from LIVE to the others and prevents syncing from other environments to LIVE. This limitation also means that all configuration and content should be added on LIVE only and only then synced down the line. Every change to the schema or a permanent setting should be handled by update scripts (as we are talking Magento CE here; Magento EE has migrations built in).
For deployment I also suggest you build a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and deploys code from the central repository (rough sketch below).
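For example, a minimal Fabric sketch along those lines might look like the following. Hostnames, database names, paths, the base-URL helper and the cache-clean command are placeholders for your own setup:

```python
# Rough sketch of a Fabric deploy/reset script: one-way DB sync from LIVE and
# a code deploy from the central repo. Hostnames, paths, DB names and the
# repo layout are all placeholders.
from fabric import Connection

LIVE = Connection("deploy@live.example.com")
STAGE = Connection("deploy@stage.example.com")

def sync_db_from_live():
    """Dump the LIVE DB and import it into STAGE (never the other way round)."""
    LIVE.run("mysqldump --single-transaction shop_live > /tmp/live.sql")
    LIVE.get("/tmp/live.sql", "live.sql")
    STAGE.put("live.sql", "/tmp/live.sql")
    STAGE.run("mysql shop_stage < /tmp/live.sql")
    STAGE.run("/var/www/stage/bin/update-base-urls.sh")  # hypothetical helper to fix Magento base URLs

def deploy_code(ref="master"):
    """Reset the staging checkout to the given ref from the central repository."""
    with STAGE.cd("/var/www/stage"):
        STAGE.run("git fetch origin")
        STAGE.run(f"git reset --hard origin/{ref}")
        STAGE.run("rm -rf var/cache/*")  # placeholder for your Magento cache-clean step

if __name__ == "__main__":
    sync_db_from_live()
    deploy_code()
```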
It's also a good idea to automate the following everyday tasks:
The client needs to reset STAGE for their own tests.
Project managers, developers or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make it live in some subfolder for that specific test only), as should deleting the test clone afterwards (see the sketch after this list).
Third-party devs might need access to a specific test or dev environment (this is relevant with Magento, as on average there are at least 10 external extensions installed in every Magento store).
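As an illustration of the one-click test clone, here's a rough Python sketch. Paths, database names and the Magento config step are placeholders and would need adapting to your layout:

```python
# Sketch of a "one-click" test clone: copy the current STAGE code and DB into
# a throwaway subfolder/database named after the test. Paths, DB names and
# credentials are placeholders; cleanup is just deleting the folder and DB.
import subprocess
import sys

def spawn_test_clone(test_name):
    src_dir = "/var/www/stage"
    clone_dir = f"/var/www/tests/{test_name}"
    clone_db = f"shop_test_{test_name}"

    # 1. Copy the codebase into a subfolder for this test only.
    subprocess.run(["cp", "-a", src_dir, clone_dir], check=True)

    # 2. Create a dedicated database and load it with the current stage data.
    subprocess.run(["mysqladmin", "create", clone_db], check=True)
    dump = subprocess.run(["mysqldump", "shop_stage"], check=True, capture_output=True)
    subprocess.run(["mysql", clone_db], input=dump.stdout, check=True)

    # 3. Point the clone at its own database (for Magento that means editing the
    #    local DB config in the clone) -- left as a sed/replace step for your setup.
    print(f"Test clone ready in {clone_dir} using database {clone_db}")

def delete_test_clone(test_name):
    subprocess.run(["rm", "-rf", f"/var/www/tests/{test_name}"], check=True)
    subprocess.run(["mysqladmin", "-f", "drop", f"shop_test_{test_name}"], check=True)

if __name__ == "__main__":
    spawn_test_clone(sys.argv[1])
```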