Combining Travis and EC2

I have a GitHub project that uses Travis for continuous integration. I would like to deploy my project on Amazon EC2. In order to simplify deployment, I would like the deployed system to have the same configuration as the test system. Is this possible?
AFAICT, this requires two things: first, a preconfigured EC2 instance that matches the settings used by Travis. Does one exist? And second, a way to execute .travis.yml scripts from the command line. How can I do that?

As for executing .travis.yml scripts from the command line: if I were you, I would instead approach it the other way around and replace your .travis.yml with something like this:
language: bleh
etc etc...
install:
- ./travis-scripts/install.sh
before_script:
- ./travis-scripts/before_script.sh
script:
- ./travis-scripts/script.sh
Of course, you will still have to write a script that installs whatever language versions, Travis plugins, etc. you need on your Amazon EC2 instance.
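For example, a hypothetical travis-scripts/install.sh (the package list and commands below are placeholders for whatever your project actually needs) might look like this:
#!/usr/bin/env bash
# Install on EC2 the same dependencies that the Travis "install" phase would.
set -euo pipefail

sudo apt-get update
sudo apt-get install -y build-essential git

# Install your project's dependencies here exactly as Travis would,
# e.g. bundle install, pip install -r requirements.txt, npm install, ...
Because the same script runs in both places, the Travis build and the EC2 instance stay configured the same way.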
As for an Amazon EC2 instance that matches the Travis VMs, I don't know about that, as I'm not that familiar with Amazon AWS, but I can tell you that Travis VMs are based on Ubuntu 12.04, and there is much more specific information on the Build Environment documentation page.

So you want something on an EC2 instance that can read your .travis.yml file and configure the machine the same way Travis does when it runs your tests?
I think that's a pretty long shot for a relatively simple problem like this. Travis is an integration and testing platform that uses a lot of other systems (like Chef and Docker) to do what it does with the .yml files. Using that whole system just to run a single app sounds like overkill.
I would recommend using Chef (or something similar, like Puppet) to configure your production environment and deploy your app.
You could have one Chef recipe that configures the production environment (databases, configuration files, package installs, etc.) and another that deploys, configures, and starts your app. When you want to change the production environment, you change these files. They can easily be bundled with the project.

Related

How should I deploy the result of the CI/CD pipeline on my production server

I have a GitLab CI/CD pipeline that successfully builds, tests, and pushes my project's container to the GitLab container registry. Now I am wondering how I can automate the deployment stage as well. Currently I do it manually: after each successful pipeline, I SSH to my server and run several commands to pull the latest images from the GitLab.com container registry and then run them. I would like to automate this step too, but I don't know how.
I have seen some examples of opening an SSH session from the CI/CD pipeline, but that doesn't feel secure enough. So I was wondering: is there a better way, or do I really have to do it like that?
Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.
There are many ways to achieve this, depending on your setup, other requirements, scale etc.
I'll just give you two options.
I. Kubernetes
create a cluster (i.e. a control plane) somewhere
add your cluster to GitLab (GitLab can now even create a cluster for you in AWS or GCP, check this page)
attach your target machine as a worker node to the cluster
create Kubernetes YAML files / a Helm chart for your application and deploy it in the usual way, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you (a minimal sketch follows below)
This is quite complex, but it is sort of the "right" way of doing things.
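To make that last step concrete, here is a minimal sketch of a Deployment manifest, assuming a hypothetical image pushed to the GitLab container registry under registry.gitlab.com/mygroup/myapp (the group, project name, and port are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        # placeholder image path in the GitLab container registry
        image: registry.gitlab.com/mygroup/myapp:latest
        ports:
        - containerPort: 8080
You would apply it with kubectl apply -f deployment.yaml (or package it into a Helm chart); if the registry is private, the cluster also needs image pull credentials.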
II. Private GitLab runner
go to Settings > CI/CD > Runners of your GitLab project or group
obtain the registration token
install your own GitLab runner right on the target machine and register it on the GitLab server using the registration token, see example
give the runner a specific tag
use that tag in your .gitlab-ci.yml file, see documentation
then the deployment process is just a local docker pull ... and docker run ... for your image (a sketch of such a job follows below)
This is a lot simpler, but it is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
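A minimal sketch of such a deploy job, assuming the private runner was registered with a hypothetical tag named production; the container name and port mapping are also placeholders:
deploy:
  stage: deploy
  tags:
    - production   # the tag you gave your private runner
  only:
    - master
  script:
    # log in to the GitLab registry with the job's built-in credentials
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    # replace the running container; name and ports are placeholders
    - docker stop myapp || true
    - docker rm myapp || true
    - docker run -d --name myapp -p 80:8080 "$CI_REGISTRY_IMAGE:latest"
Because the runner lives on the target machine, docker pull and docker run act on that machine directly, with no SSH involved.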

CI based on docker-compose?

I am currently building a little application that requires some massively annoying software to be installed and running in the background. To ease the pain of developing, I wrote a set of docker-compose files that run the necessary daemons, create some jobs, and throw in some test data.
Now, I'd like to run this in a CI-like manner. I currently have Jenkins check all the different repositories and execute a shell script that calls docker-compose up --abort-on-container-exit. That gets the job done, but it seems like a hack, and I'm not such a huge fan of Jenkins.
What I want to ask is: is there a more beautiful way of doing this? Specifically, is there a CI that will
watch a set of git repositories,
re-execute docker-compose (possibly multiple times with different sets of parameters), and
nicely collect and split the logs, and tell me exactly which container failed and how?
(Optionally) is not some cloud service but installable on my local server?
If the answer to this is "write a Jenkins module", then fine, so be it.
I'm aware that there are options like gitlab-ci, but I'd like to keep the CI script in a form that can also be easily executed during development, before pushing to a repo.
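One way to keep that last requirement, regardless of which CI you end up with, is to put the whole docker-compose invocation in a small script that both developers and the CI server call. A minimal sketch (ci.sh and the "tests" service name are hypothetical):
#!/usr/bin/env bash
set -u
status=0

# Bring the stack up; --exit-code-from implies --abort-on-container-exit
# and propagates the exit code of the named service (here a hypothetical "tests").
docker-compose up --build --exit-code-from tests || status=$?

# Collect the combined logs so failures are easier to attribute per container.
docker-compose logs --no-color > compose.log || true
docker-compose down -v || true

exit "$status"
Jenkins, GitLab CI, or a plain shell during development can then all run the same ./ci.sh.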

Knifing outside a chef run from a node

I have a Jenkins server that I want to use to deploy some code to some servers. To pick the right servers, I would like the Jenkins job to query Chef for nodes with a particular role.
However, I am not sure if that is a good idea or an anti-pattern, and I am not sure how to go about it in practice.
The Jenkins server is already listed as a non-admin client, so I am wondering if I can use the existing credentials for something, or if I should create a Jenkins admin and set up a knife.rb in the Jenkins home directory.
You would probably want to use one of the Chef scripting libraries like chef-api (Ruby), PyChef (Python), or Jclouds (Java) rather than knife itself. Using Jenkins for deploys is a bit wonky as it isn't reeeeally meant for that, but you can make it work. Tools like Push Jobs, Fabric, and RunDeck are possibly better suited, and all have direct integration with Chef's node catalog like you describe.
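For reference, the simplest form of such a query from a Jenkins shell step uses knife directly (the role name and the knife.rb path under the Jenkins home are placeholders, and this assumes the Jenkins client's credentials are allowed to search):
# list the FQDNs of all nodes that carry the "webserver" role
knife search node 'role:webserver' -a fqdn -c /var/lib/jenkins/.chef/knife.rb
The libraries above let you run the same search programmatically and feed the result straight into your deploy logic.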

How can Puppet fit into a Continuous Delivery tool chain?

I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck on how to put together a sensible Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or running 'puppet agent --test' on each node in the environment, isn't good enough, because it may pick up other pending changes (I don't know whether another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video at Integrating DevOps tools into a Service Delivery Platform (at timestamp around 22:30).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/ # This is your modulepath for local deployment
# Following contains the manifests for a module name "localmod"
/scratch/user/puppet/local/localmod/manifests/init.pp
# example content of init.pp
class localmod {
  notify { "I am in the local module": }
}
On the local machine you can test this module via puppet apply:
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
I watched the video at the timestamp you mentioned. There are two types of automation you can do.
Application build/deploy automation, which can be achieved via maven/ant (build) and ant/capistrano/chrome/bash/msdeploy (deploy), or as that slide terms it, an "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be... "How do I do application builds using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount:
the rich DSL Puppet has to offer
the many peer-reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
# update manifests etc. (version control is the source of truth)
ssh user@host git pull
# run puppet locally on the node
ssh user@host "sudo puppet apply -v --modulepath=/scratch/user/puppet/local -e 'include localmod'"
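To keep the output and exit status of each component separate, as the question asks, a thin wrapper around that pattern could look like this (deploy_component.sh, the user/host, and the repository path are placeholders, reusing the module layout from the puppet apply example above):
#!/usr/bin/env bash
# Usage: ./deploy_component.sh <host> <module>
set -u
host="$1"
module="$2"
log="deploy-${module}.log"

# update manifests on the node, then apply only the named module
ssh "user@${host}" "cd /scratch/user/puppet/local && git pull"
ssh "user@${host}" "sudo puppet apply -v --modulepath=/scratch/user/puppet/local -e 'include ${module}'" > "${log}" 2>&1
status=$?

echo "${module} on ${host}: exit ${status} (log: ${log})"
exit "${status}"
Each component gets its own log file and its own exit status, so the results for component A and component B never get mixed up in the tool chain.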

Create stand-alone system services in Ruby

I want to build an application that runs as a stand-alone system service, always running in the background, and serves a front end with a web interface.
Just like /etc/init.d/apache2 start on Linux, I want to be able to start my application with /etc/init.d/myapp start.
My main target is to deliver on Linux, especially Ubuntu; the whole app would be in plain Ruby and the front end would be in Sinatra.
I want installation to be as simple as gem install my_app, with command-line features available to start the service. The application will do heavy processing and database insertion. I also want its configuration to be set in the usual Linux fashion, like /etc/apache2/apache2.conf.
Can anyone guide me on this? Also, if possible, I want to protect the code; is there any possibility of doing that?
I am using the Daemon-Kit gem for the same requirements. It works very well in production. The only thing it does not include is configuration via a .conf file, but it's easy to do that yourself with Ruby code. You can deploy with Capistrano, no need to install.
