Managing multiple apps/ecosystem.config.js files with pm2 - ansible

I am building a project which will live on a single server that will contain multiple services running side by side. I am using ansible to provision the server to automate setting everything up.
Services running:
Headless CMS
Database
Other nodejs API etc...
If I need to scale this project up in the future, I would want to separate the above services out onto their own servers, which has led me to create separate ansible roles for each of them.
My Question:
I am having real difficulty working out how to use pm2 to run my two nodejs apps alongside each other.
I know that I can have a single ecosystem.config.js file containing multiple apps, which would fit my current architecture (everything hosted on a single server). However, this would be a pain later down the road if I were to move one of my ansible roles to its own server.
Is there a way to deploy my nodejs apps to production under pm2 management, but in a way where each has its own configuration file and systemd service that I can define in ansible?
If I have multiple ecosystem.config.js files, one for each nodejs app, can pm2 manage these with the default systemd service it offers when running:
pm2 startup
Or should I just write my own separate systemd services which I could then manually install in each ansible role through templates?
I'm really lost here and have spent so much time trying to work out the best approach to take so any help would be great!!
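For context, the kind of per-role config I have in mind is a minimal ecosystem.config.js per app, roughly like this (the app name, paths and port are placeholders, not a working setup):

// roles/cms/templates/ecosystem.config.js - hypothetical per-role pm2 config
module.exports = {
  apps: [
    {
      name: 'headless-cms',      // placeholder app name
      script: './server.js',     // entry point of this service only
      cwd: '/srv/headless-cms',  // deploy path managed by the ansible role
      env: {
        NODE_ENV: 'production',
        PORT: 3001
      }
    }
  ]
};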

After doing some more research on this matter, I came across this super helpful thread on pm2's GitHub. Basically, it seems that for automation, plain systemd services are the way to go rather than pm2's startup command (which does create a systemd service, but one that is more complex to manage with automation software such as ansible).
I strongly recommend you read it if you stumble across this question!
https://github.com/Unitech/pm2/issues/2914
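To give an idea of what that looks like, here is a rough sketch of the kind of unit file an ansible role could template out per service instead of relying on pm2 startup (the service name, user and paths are placeholders):

# roles/cms/templates/headless-cms.service.j2 - hypothetical ansible template
[Unit]
Description=Headless CMS (node)
After=network.target

[Service]
Type=simple
User=node
WorkingDirectory=/srv/headless-cms
ExecStart=/usr/bin/node server.js
Restart=on-failure
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

If you still want pm2's process management, ExecStart could instead point at pm2-runtime with the role's own ecosystem.config.js.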

Related

How to test Go App Engine apps locally on Win 10 and use app.yaml

In Google's latest docs, they say that to test Go 1.12+ apps locally, one should just use go build.
However, this doesn't take into account all the routing etc. that would happen in App Engine using the app.yaml config file.
I see that the dev_appserver.py is still included in the sdk. But it doesn't seem to work in Windows 10.
How does one test their Go App Engine app locally with the app.yaml, i.e. as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service I would recommend following #cerise-limón's comment suggestion. In general, it is recommended for the routing logic of the application to be handled within the code. Although I'm not a Go programmer, for single service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file, you might follow two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that could be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them.
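For illustration, dispatch.yaml routing of the kind mentioned above looks roughly like this (service names and URL patterns are made up); when running locally you would have to replicate this mapping in your own routing layer:

# dispatch.yaml - hypothetical routing between two services
dispatch:
  - url: "*/api/*"
    service: api        # requests under /api/ go to the api service
  - url: "*/*"
    service: default    # everything else goes to the default service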
Another approach that is widely used is to have a separate project for development purposes where you could just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services it would be the easiest option, but it largely depends on your budget.

How do I manage micro services with DevOps?

Say I have a front end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and has its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own github repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own github repo and CI/CD pipeline?
From my experience you can do both. I saw some teams putting multiple micro-services in one repository. We were putting each micro-service in a separate repository, as our Jenkins pipeline was built in a generic way to handle them like that. This included having some configuration files in specific directories like "/Scripts/microserviceConf.json". This was helping us in some cases. In general you should also consider the cost, as GitHub has a pricing model which takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
You need to be backwards compatible. That means if your blog's 2.4 version is not compatible with tools version 2.3, you have high dependency and coupling, which goes against one of the key benefits of micro-services. There are many ways to get around this. You can introduce a versioning scheme for your micro-services' APIs. If you make a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of that API. So POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools micro-service would get a bridge period in which you support both APIs so it can migrate to v2.
Also take a look at Semantic versioning here.
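As a rough illustration (Express-style, with made-up routes and payload fields, not an actual implementation), the two versions can simply live side by side in the blog service until tools has migrated:

// blog service - hypothetical sketch of v1 and v2 endpoints coexisting
const express = require('express');
const app = express();
app.use(express.json());

// v1 endpoint kept alive for older consumers such as the tools service
app.post('/blogs/api/blog', (req, res) => {
  res.status(201).json({ id: 1, title: req.body.title });
});

// v2 endpoint with the new, incompatible payload shape
app.post('/blogs/api/v2/blog', (req, res) => {
  res.status(201).json({ id: 1, title: req.body.title, tags: req.body.tags || [] });
});

app.listen(3000);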
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of micro-service orchestration and service discovery. Usually your cloud provider's orchestration service has tools to deal with this. You can take a look at AWS ECS and/or AWS EKS (its Kubernetes service) and how they do it.
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers which would represent your whole system. This would include your micro-services, infrastructure (database, cache, helpers) and others. You can read about it more in this answer here. It is described in the section "Considering the Development Setup".
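A minimal sketch of such a setup, reusing the service names from the question (the build paths and the Postgres image are assumptions, not a prescribed layout):

# docker-compose.yml - hypothetical local development network
version: "3.8"
services:
  frontend:
    build: ./frontend            # assumed path to the front end's Dockerfile
    ports:
      - "3000:3000"
    depends_on: [tools, blog, store]
  tools:
    build: ./tools
  blog:
    build: ./blog
  store:
    build: ./store
  db:                            # shared infrastructure for local development only
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example

With that in place, docker-compose up --build starts the whole system locally in one command, similar to the monolithic workflow you are used to.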
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

Kubernetes CI/CD pipeline

My company has decided to transition to a microservice-based architecture.
We have been doing a bunch of research for the last couple of months on exactly what the architecture of this thing is going to look like.
So far, we've settled on:
Dotnet core for service development (although being language agnostic is somewhat of an end goal)
Kafka for message brokering
Docker
Kubernetes
Ansible
We have a pretty basic proof of concept working, which seems to have ticked all the right boxes with the management team, and is an absolute joy to work with.
My next task is to investigate options for how the development workflow is actually going to work. They are already used to working in a CI/CD manner, with some of their newer products using Jenkins/Octopus Deploy.
My question is: Do any of you have any firm recommendations for setting up a CI/CD pipeline when deploying to a Kubernetes cluster?
A list of must haves is:
Multiple environments i.e. Integration, Test, UAT, Staging, Production.
A means through which different business units can uniquely handle deployments to different environments (development can only push to integration, tester into test, etc). This one is probably their biggest ask - they are used to working with Octopus, and they love the way it handles this.
The ability to roll back / deploy at the click of a button (or with as few steps as possible).
We would be deploying to our own servers initially.
I've spent the past couple of days looking in to options, of which there are many.
So far, Jenkins Pipeline seems like it could be a great start. Spinnaker also seems like a solid choice. I did read a bit into Fabric8, and while it offers much of what I'm asking, it seems a bit like overkill.
If you want to use Jenkins, Pipelines are indeed the way to go. Our setup does pretty much what you want, so let me explain how we set it up.
We use a Jenkins agent that has docker and kubectl installed. This agent first builds the docker container and pushes it to our docker registry. It will then call kubectl in various stages to deploy to our testing, acceptance and production clusters.
Different business units: in a Pipeline you can use an input step to ask whether the Pipeline should proceed or not. You can specify who may press the button, so this is how you could solve the deployment to different clusters. (Ideally, when you get to CD, people will realize that pressing the button several times per day is silly and they'll just automate the entire deployment.)
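A stripped-down declarative Pipeline along those lines might look like the following (the registry, deployment name, kubectl contexts and the submitter group are placeholders, not our actual setup):

// Jenkinsfile - hypothetical sketch of the build/push/deploy flow with a manual gate
pipeline {
  agent { label 'docker-kubectl' }   // agent with docker and kubectl installed
  stages {
    stage('Build and push image') {
      steps {
        sh 'docker build -t registry.example.com/myservice:${BUILD_NUMBER} .'
        sh 'docker push registry.example.com/myservice:${BUILD_NUMBER}'
      }
    }
    stage('Deploy to test') {
      steps {
        sh 'kubectl --context=test set image deployment/myservice myservice=registry.example.com/myservice:${BUILD_NUMBER}'
      }
    }
    stage('Deploy to production') {
      steps {
        input message: 'Deploy to production?', submitter: 'release-managers'
        sh 'kubectl --context=prod set image deployment/myservice myservice=registry.example.com/myservice:${BUILD_NUMBER}'
      }
    }
  }
}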
Rollback: we rely on Kubernetes's rollback system for this.
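In practice that means a bad release of a hypothetical myservice deployment can be reverted with the standard rollout commands:

kubectl rollout history deployment/myservice   # list previous revisions
kubectl rollout undo deployment/myservice      # roll back to the previous revision
kubectl rollout status deployment/myservice    # wait for the rollback to finish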
Credentials: we provision the different Kubernetes credentials using Ansible directly to this Jenkins agent.
To reduce code duplication, we introduced a shared Jenkins Pipeline library, so each (micro)service talks to all Kubernetes clusters in a standardized way.
Note that we use plain Jenkins, Docker and Kubernetes. There is likely tons of software to further ease this process, so let's leave that open for other answers.
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins pipelines and GitOps for promotion across environments.
If you want to see how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests, you might want to check out my recent talk on Jenkins X at Devoxx UK where I do a live demo of this on GKE.

How do you run utility services on Heroku?

Heroku is fantastic for prototyping ideas and running simple web services. I often use it to run Python web services like Flask and Django and try out ideas. However, I've always struggled to understand how you can use the infrastructure to run those amazingly powerful support or utility services every startup needs in its stack. Four examples of services I can't live without and would recommend to any startup:
Jenkins
Statsd
Graphite
Graylog
How would you run these on Heroku? Would it be best just getting dedicated boxes (Rackspace, etc.) with these support services installed?
Has anyone run utility daemons (services) on Heroku?
There are two basic options. The first is to find or create a Heroku addon to accomplish the task. For example, there are many hosted logging solutions you can use instead of Graylog; Rails on Fire or Travis can be used instead of Jenkins. If an appropriate addon doesn't exist, you can effectively make your own by just running the service on an AWS EC2 instance.
The other alternative is to push the service into being a 12factor application so that it can run on Heroku as well. For example, you could stub out whisper's filesystem calls so that they store in a backing service instead. This is often pretty painful and brittle, though, unless you can get your changes accepted by the upstream maintainers.
You could also use another free service in conjunction with it. OpenShift has a lot of Java-related build services and tools that can be added.
I am using a mix of heroku, openshift, mongolab and my own web hosting. Throw in dropbox and box for some space...

What exactly is Heroku?

I just started learning Ruby on Rails and I was wondering what Heroku really is. I know that it's a cloud that helps us to avoid using servers? When do we actually use it?
Heroku is a cloud platform as a service. That means you do not have to worry about infrastructure; you just focus on your application.
In addition to what Jonny said, there are a few features of Heroku:
Instant Deployment with Git push - build of your application is performed by Heroku using your build scripts
Plenty of Add-on resources (applications, databases etc.)
Processes scaling - independent scaling for each component of your app without affecting functionality and performance
Isolation - each process (aka dyno) is completely isolated from each other
Full Logging and Visibility - easy access to all logging output from every component of your app and each process (dyno)
Heroku provides a very well written tutorial which allows you to start in minutes. Also, they provide the first 750 computation hours free of charge, which means you can have one process (aka dyno) at no cost. Performance is very good too, e.g. a simple web application written in node.js can handle around 60-70 requests per second.
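For example, a first deployment of a small node.js app is roughly this (the Procfile assumes a server.js entry point; the app name is whatever Heroku generates):

# Procfile
web: node server.js

# from the app's git repository
heroku create
git push heroku main
heroku ps:scale web=1
heroku logs --tail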
Heroku competitors are:
OpenShift by Red Hat
Windows Azure
Amazon Web Services
Google App Engine
VMware
HP Cloud Services
Force.com
It's a cloud-based, scalable server solution that allows you to easily manage the deployment of your Rails (or other) applications provided you subscribe to a number of conventions (e.g. Postgres as the database, no writing to the filesystem).
Thus you can easily scale as your application grows by upgrading your database and increasing the number of dynos (Rails instances) and workers.
It doesn't help you avoid using servers, you will need some understanding of server management to effectively debug problems with your platform/app combination. However, while it is comparatively expensive (i.e. per instance when compared to renting a slice on Slicehost or something), there is a free account and it's a rough trade off between whether it's more cost effective to pay someone to build your own solution or take the extra expense.
Heroku basically provides you with web space to upload your app.
If you are uploading a Rails app then you can follow this tutorial
https://github.com/mrkushjain/herokuapp
As I see it, it is a scalable, managed web hosting service, ready to grow in any sense so you don't have to worry about it.
It's not useful for a normal PHP web application, because there are plenty of web hosting services with FTP out there for a simple site without scalability needs, but if you need something bigger, Heroku or something similar is what you need.
It is exposed as a service via a command line tool so you can write scripts to automate your deployments. Anyway it is pretty similar to other web hosting services with Git enabled, but Heroku makes it simpler.
That's its thing: to make the administration stuff simpler for you, so it saves you time. But I'm not sure, as I'm just starting with it!
A nice introduction of how it works in the official documentation is:
https://devcenter.heroku.com/articles/how-heroku-works
Per DZone: https://dzone.com/articles/heroku-or-amazon-web-services-which-is-best-for-your-startup
Heroku is a Platform as a Service (PaaS) product based on AWS, and is vastly different from Elastic Compute Cloud. It’s very important to differentiate ‘Infrastructure as a Service’ and ‘Platform as a Service’ solutions as we consider deploying and supporting our application using these two solutions.
Heroku is way simpler to use than AWS Elastic Compute Cloud. Perhaps it’s even too simple. But there’s a good reason for this simplicity. The Heroku platform equips us with a ready runtime environment and application servers. Plus, we benefit from seamless integration with various development instruments, a pre-installed operating system, and redundant servers.
Therefore, with Heroku, we don’t need to think about infrastructure management, unlike with AWS EC2. We only need to choose a subscription plan and change our plan when necessary.
That article does a good job explaining the differences between Heroku and AWS, but it looks like you can choose IaaS (infrastructure) providers other than AWS. So ultimately Heroku seems to just simplify the process of using a cloud provider, but at a cost.
