Using Octopus, I want to set things up so that attempting to deploy to the Production environment shows a warning to the user and requires a manual intervention to continue, to prevent accidental Production deployments.
I don't want this to happen in other environments, e.g.
Development > Staging > UAT > Production (only show the warning when deploying from UAT to Production).
I've been unable to find out whether there is a way to do this.
You can configure steps to run only in specific environments. Add a Manual Intervention step to your deployment process and scope it to the Production environment; the deployment will then pause and wait for approval only when it targets Production.
The Heroku docs state:
Your Heroku app runs in at least two environments:
On your local machine (i.e., development).
Deployed to the Heroku platform (i.e., production)
Ideally, your app should run in two additional environments:
Test, for running the app’s test suite safely in isolation
Staging, for running a new build of the app in a production-like setting before promoting it
https://devcenter.heroku.com/articles/multiple-environments#managing-staging-and-production-configurations
However, the Heroku pipeline interface only offers 'staging' and 'production' as options. How do I create a 'test' stage in my pipeline? Are the docs out of date, or am I misunderstanding the functionality?
Right now, I deploy my (Spring Boot) application to an EC2 instance like this:
Build JAR file on local machine
Deploy/Upload JAR via scp command (Ubuntu) from my local machine
I would like to automate that process, but:
without using Jenkins + Rundeck CI/CD tools
without using AWS CodeDeploy service since that does not support GitLab
Question: Is it possible to perform these 2 simple steps (currently done manually: building and deploying via scp) with GitLab CI/CD tools, and if so, can you outline the steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've described.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs a gradle container and uses Gradle to build the app. GitLab CI artifacts are used to capture the built JAR (my_app.jar), which is passed on to the deploy job.
The deploy job runs an ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project level CI/CD variables that will be passed in to your CI jobs.
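For example, here is a minimal sketch of how the deploy job might use such a variable, assuming the private key is stored in a project-level CI/CD variable named SSH_PRIVATE_KEY (the variable name, user, and paths are placeholders, not part of the answer above):

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    # Write the key from the SSH_PRIVATE_KEY CI/CD variable (assumed name) to a file
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    # Use the key and skip interactive host-key checking inside the throwaway CI container
    - scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no my_app.jar user@target.server:/home/user/my_app.jar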
Create a shell file with the following contents:
#!/bin/bash
# Copy the JAR file to EC2 via SCP, with the PEM key in the home directory (usually /home/ec2-user)
scp -i user_key.pem myJar.jar ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user
# SSH to the EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@my.ec2.id.amazonaws.com /bin/bash <<-'END2'
# The following commands will be executed automatically by bash.
# Consider this a remote shell script.
killall java
java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
echo 'done'
# Once completed, the shell will exit.
END2
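To use it, make the script executable and run it from the directory that contains the JAR and the .pem key (the file name deploy.sh is just an assumed example):

chmod +x deploy.sh
./deploy.sh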
In 2020, this should be easier with GitLab 13.0 (May 2020), using Auto DevOps, an existing feature introduced in GitLab 11.0 (June 2018):
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
Overview
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn’t been a simple way to deploy to Amazon Web Services. As a result, GitLab users had to spend a lot of time figuring out their own configuration.
In GitLab 13.0, Auto DevOps has been extended to support deployment to AWS!
GitLab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and GitLab does the rest! With the elimination of the complexities, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
define the AWS-typed environment variables ‘AWS_ACCESS_KEY_ID’, ‘AWS_ACCOUNT_ID’ and ‘AWS_REGION’, and
enable Auto DevOps.
Then, your ECS deployment will be automatically built for you with a complete, automatic, delivery pipeline.
See documentation and issue
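For illustration, a minimal sketch of what enabling this from the repository might look like in .gitlab-ci.yml, assuming you include GitLab's built-in Auto DevOps template rather than toggling it in the project settings (the values shown are placeholders; real credentials such as AWS_ACCESS_KEY_ID belong in project-level CI/CD variables, not in the file):

# Enable Auto DevOps via GitLab's built-in template
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Non-secret AWS settings expected by the ECS deployment (placeholders)
  AWS_ACCOUNT_ID: "123456789012"
  AWS_REGION: "us-east-1"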
Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a Node.js/Express app and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) can I have different Dockerfiles, one for Dev and one for Dist? If not, B) can I have if statements in my Dockerfiles that would do something like... if ENV == 'dist', add /dist... etc.?
I'm struggling to figure out how to move this from a Dev environment locally to a tightened up production ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV" and if it is set to "DEV" it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
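A rough sketch of that kind of entrypoint script, assuming a Node.js app like the one in the question (the file name, npm script, and dist/server.js path are illustrative, not taken from the answer):

#!/bin/bash
# docker-entrypoint.sh: choose the startup mode based on the ENV variable
if [ "$ENV" = "DEV" ]; then
    # Development: start the server with debugging/reloading enabled
    exec npm run dev
else
    # Production: launch the built version of the server
    exec node dist/server.js
fi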
Alternatively, you can avoid using Docker in development, and instead have a Dockerfile at the root of your repo. You can then use your CI server to build the image (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
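For instance, a minimal sketch of that, based on the dev command from the question but without the development volume mount (the image name your_registry/pug_web is an assumed placeholder):

# Pull the CI-built image and run it without mounting the local source tree
docker pull your_registry/pug_web:latest
docker run -d -p 80:3000 --name pug_web your_registry/pug_web:latest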
I have tried deploying my MEAN.js app on Heroku.
I forked this: https://github.com/meanjs/mean
1.) Logged in to Heroku
2.) Deployed and connected the GitHub repository
3.) Enabled automatic deploys (CI)
4.) Clicked on Manual Deploy
On the build log it says "Build succeeded".
My question is.
Why am I getting this application error?
When all I did was forked the repository and deployed it on heroku?
https://serkolgame.herokuapp.com/
Did you add a MongoDB to your app? Without it, the startup process is likely to fail.
Here are some options:
If you are using the default dev environment - then just add the mongodb connection string in development.js and restart the server
If you are using the prod environment - then you can use environment variables uri: process.env.MONGOHQ_URL or process.env.MONGOLAB_URI
This is assuming you have a MongoDB sandbox set up somewhere; if you don't, you'll first need one (get it from Heroku, Compose.io, or MongoLab).
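For the Heroku/production case, a minimal sketch of wiring that up, assuming you already have a connection string from one of those providers (the URI below is a placeholder):

# Expose the connection string to the app as process.env.MONGOLAB_URI
heroku config:set MONGOLAB_URI="mongodb://user:password@host.example.com:27017/mydb"
# Restart the dynos so the new variable is picked up
heroku restart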
I'm trying to set up multiple roles, one for live, and another for dev. They look like this:
role :live, "example.com"
role :dev, "dev.example.com"
When I run cap deploy, however, it executes for both servers. I've tried the following and it always executes on both.
cap deploy live
cap ROLE=live deploy
What am I missing? I know I can write a custom task that only responds to one role, but I don't want to have to write a whole bunch of tasks just to tell it to respond to one role or another. Thanks!
Capistrano Multistage is definitely the solution to the example you posted for deploying to environments. In regard to your question of deploying to roles or servers, Capistrano has command-line solutions for that too.
To deploy to a single role (notice ROLES is plural):
cap ROLES=web deploy
To deploy to multiple roles:
cap ROLES=app,web deploy
To deploy to a particular server (notice HOSTS is plural):
cap HOSTS=web1.myserver.com deploy
To deploy to several servers:
cap HOSTS=web1.myserver.com,web2.myserver.com deploy
To deploy to a server(s) with a role(s):
cap HOSTS=web1.myserver.com ROLES=db deploy
You can do something like this:
task :dev do
  role :env, "dev.example.com"
end

task :prod do
  role :env, "example.com"
end
Then use:
cap dev deploy
cap prod deploy
Just one more hint: if you use multistage, remember to put the ROLES constant before the cap command.
ROLES=web cap production deploy
or after the environment:
cap production ROLES=web deploy
If you put it as the first parameter, multistage will treat it as a stage name and replace it with the default one:
cap ROLES=web production deploy
* [...] executing `dev'
* [...] executing `production'
Try capistrano multistage:
http://weblog.jamisbuck.org/2007/7/23/capistrano-multistage
Roles are intended to deploy different segments on different servers, as opposed to deploying the whole platform to just one set of servers.
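If you go the multistage route for the live/dev split from the question, a rough sketch of the setup with the capistrano-ext gem might look like this (file names and stage names are illustrative):

# config/deploy.rb
set :stages, %w(dev live)
set :default_stage, "dev"
require 'capistrano/ext/multistage'

# config/deploy/dev.rb
role :app, "dev.example.com"

# config/deploy/live.rb
role :app, "example.com"

Then cap live deploy and cap dev deploy each target only the matching server.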