How can microservices do with Docker Compose and swarm mode? - spring-boot

Need help with this MCQ. Please explain the answer if possible
How can microservices do with Docker Compose and swarm mode?
A. Construct and define multi container application
B. Provide analysis reports of container orchestration performance
C. Collect data on speed and efficiency and recommend alterations
D. None of the options

Answer: None of the given options (Assuming the question is about What rather than How)
Reason:
All the options given in A-C are handled by a container-orchestration platform, be it Kubernetes, Docker Swarm, Azure Container Service or any other platform of your choice. A microservice application can at most spin up a Docker container, but it can't really decide by itself things like how many containers to spin up or when to stop some of them, because the main aim of the application is not to manage containers but to address business needs.

Related

Using Kuma to run a multi-cloud service mesh

How can I use Kuma to run a multi-cloud service mesh that spans across a VM-based environment as well as a Kubernetes-based environment?
Specifically, how will service discovery work in such a way that VM-based workloads can discover K8s-based ones and vice-versa?
Kuma defines the so-called zone as a domain of control isolation, i.e. all workload connections are managed by a single control plane. Such a control plane is called a remote control plane. The overall view and policy management are handled by a global control plane, which unifies all zones.
When you start planning a distributed deployment, you have to decide on the following items:
Where the global control plane will be deployed, and its type. The latter can be either Universal (VM/bare metal/container) or Kubernetes (on-premises/cloud).
Number and type of zones to add. These can be changed over time.
Follow the instructions to install the global control plane, using the steps specific to the chosen type of deployment, and gather the relevant IP addresses/ports as described.
Installing a remote control plane is fairly trivial; the process can be repeated as needed during the lifetime of the whole multi-zone deployment. A rough sketch of both steps follows below.
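For a Kubernetes deployment of both planes, the two installation steps look roughly like the kumactl commands below. The exact mode and flag names vary between Kuma versions (newer releases say "zone" instead of "remote"), and the zone name and KDS address are placeholders, so treat this as a sketch rather than the definitive commands:
# global control plane on a Kubernetes cluster
kumactl install control-plane --mode=global | kubectl apply -f -
# remote (zone) control plane, pointed at the global control plane's KDS endpoint
kumactl install control-plane --mode=remote --zone=zone-1 \
  --kds-global-address grpcs://<global-kds-address>:5685 | kubectl apply -f -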
Cross-zone service consumption is described in brief here. In short, we do recommend using the following syntax to access a service echo-server, deployed in a Kubernetes namespace echo-example and exposed on port 1010:
<kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh
Using this syntax, the service can be found and consumed even from a neighbouring Universal zone where the workload runs in a VM. Kuma leverages its own DNS service, which makes this service discovery possible.
It is recommended that services declared in VMs follow the same naming format, so that if a replica of a service is later needed in a Kubernetes cluster, the two can easily be interchanged without reconfiguring the whole infrastructure.

Microservices in practice

I have studied the concept of microservices for a good while now, and I understand what they are and why they are necessary.
Quick refresher
In a nutshell, a monolithic application is decomposed into independently deployable units, each of which typically exposes its own web API and has its own database. Each service fulfills a single responsibility and does it well. These services communicate over synchronous web services such as REST or SOAP, or using asynchronous messaging such as JMS, to fulfill some request in synergy. Our monolithic application has become a distributed system. Typically all these fine-grained APIs are made available through an API gateway or proxy, which acts as a single point-of-entry facade, performing security and monitoring related tasks.
The main reasons to adopt microservices are high availability, zero-downtime updates and high performance, achieved via horizontal scaling of a particular service, and looser coupling in the system, meaning easier maintenance. Also, the IDE and the build and deployment process will be significantly faster, and it's easier to change the framework or even the language.
Microservices go hand in hand with clustering and containerization technologies, such as Docker. Each microservice can be packaged as a Docker container so it can run on any platform. The principal concepts of clustering are service discovery, replication, load balancing and fault tolerance. Docker Swarm is a clustering tool which orchestrates these containerized services, glues them together, and handles all those tasks under the hood in a declarative manner, maintaining the desired state of the cluster.
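As a concrete illustration of the packaging step, a Spring Boot fat jar can be containerized with a few lines; the base image, jar name and port below are just illustrative placeholders:
# Dockerfile for a single Spring Boot microservice (names are placeholders)
FROM openjdk:8-jre-alpine
COPY target/service-a.jar /app/service-a.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/service-a.jar"]
The resulting image is built with docker build -t service-a . and can then run unchanged on any Docker host, or be handed to Swarm as a service.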
It sounds easy and simple in theory, but I still don't understand how to implement this in practice, even though I know Docker Swarm pretty well. Let's look at a concrete example.
Here is the question
I'm building a simple Java application with Spring Boot, backed by a MySQL database. I want to build a system where the user gets a webpage from Service A and submits a form. Service A does some manipulation of the data and sends it to Service B, which manipulates it further, writes it to the database, returns something, and in the end some response is sent back to the user.
Now the problem is that Service A doesn't know where to find Service B, nor does Service B know where to find the database (because they could be deployed on any node in the cluster), so I don't know how I should configure the Spring Boot application. The first thing that comes to mind is DNS, but I can't find tutorials on how to set up such a system in Docker Swarm. What is the correct way to configure connection parameters in Spring for a distributed cloud deployment? I have researched the Spring Cloud project, but I don't understand whether it's the key to this dilemma.
I'm also confused about how databases should be deployed. Should they live in the cluster, deployed alongside the services (possibly with the aid of Docker Compose), or is it better to manage them in a more traditional way with fixed IPs?
The last question is about load balancing. I'm confused about whether there should be a separate load balancer for each service, or just a single master load balancer. Should the load balancer have a static IP mapped to a domain name, with all user requests targeting this load balancer? What if the load balancer fails; doesn't that make all the effort to scale the services pointless? Is it even necessary to set up a load balancer with Docker Swarm at all, since it has its own routing mesh? Which node should the end user target then?
If you're looking at using Docker Swarm, you don't need to worry about DNS configuration, as it's already handled by the overlay network. Let's say you have three services:
A
B
C
A is your DB, B might be the first service to collect data, and C receives that data and updates the database (A).
docker network create \
--driver overlay \
--subnet 10.9.9.0/24 \
youroverlaynetwork
# image names below are placeholders; substitute your own application images
docker service create --network youroverlaynetwork --name A mysql:5.7
docker service create --network youroverlaynetwork --name B your-service-b-image
docker service create --network youroverlaynetwork --name C your-service-c-image
Once all the services are created, they can refer to each other directly by name.
These requests are load balanced against all replicas of the container on that overlay network. So A can always get an IP for B by referencing "http://b" or just by calling hostname B.
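To make that concrete for the Spring Boot setup in the question, the create commands for B and C above can be expanded so that all connection settings simply point at the swarm service names. The environment variables rely on Spring Boot's relaxed binding of SPRING_DATASOURCE_*; the SERVICE_C_URL property, database name, credentials and image names are hypothetical placeholders:
# C reaches the database through the service name A
docker service create --network youroverlaynetwork --name C \
  -e SPRING_DATASOURCE_URL=jdbc:mysql://A:3306/appdb \
  -e SPRING_DATASOURCE_USERNAME=appuser \
  -e SPRING_DATASOURCE_PASSWORD=secret \
  your-service-c-image
# B reaches C through the service name C
docker service create --network youroverlaynetwork --name B \
  -e SERVICE_C_URL=http://C:8080 \
  your-service-b-image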
When you're dealing with load balancing in Docker, a swarm service is already load balanced internally. Once you've defined a service to listen on port 8018, all swarm hosts will listen on port 8018 and the routing mesh will route requests to a container in round-robin fashion.
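For example, assuming the Spring Boot app inside service B listens on 8080, publishing a port is what puts the service on the routing mesh; every node in the swarm then accepts traffic on the published port (the port numbers here are assumptions):
# expose B on 8018 on every swarm node, forwarding to 8080 inside the container
docker service update --publish-add published=8018,target=8080 B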
It is still, however, best practice to have an application load balancer sit in front of the hosts in the event of host failure.

EC2 Container Service vs Apache Mesos

We are looking to use Docker containers to run our batch jobs in a cluster environment.
We are evaluating AWS ECS (EC2 Container Service), Chronos and Mesos.
As far as I know, Apache Mesos has some features/purposes that overlap with ECS, like cluster management. Chronos is a distributed scheduler.
I am having difficulty correlating all these technologies to create an architecture!
Does ECS replace Mesos? What about the scheduler?
We are a small team with little experience in cluster development. Which stack is better for our batch processing?
EDIT
I made a big edit, and I think I now understand the architecture:
This is a sample picture with two clusters being managed by Mesos.
Reading the ECS Container Service documentation (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html), AWS is on the way to integrating ECS with the Apache Mesos framework. So I imagine that, in the future, we will be able to use the Mesos framework to manage the resources in the ECS cluster, making it possible to use Chronos (for batch scheduling) and Marathon (for long-running apps).
EDIT
At this moment, we don't have distributed jobs running, like Hadoop or Spark jobs. Our jobs are much simpler, running on single EC2 instances. We are planning to use Docker to run our batch jobs.
I'd argue it depends on the type of batch jobs, but the Apache Mesos ecosystem is certainly more flexible than ECS in accommodating your needs. The flexibility comes from the fact that Mesos uses a so-called two-level scheduling model, which is a fancy way of saying that it outsources scheduling decisions to frameworks (rather than trying to implement every existing and future workload scheduling strategy in its core).
You mentioned one such framework already, Chronos, which is a good workhorse; just maybe don't use its job dependencies. Then there is another great batch framework called Cook. Depending on your needs (for example, SQL-based batch report generation) you could use Apache Spark. And so on.
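As a very rough sketch of what a containerized batch job looks like in Chronos, a job definition is POSTed to the Chronos REST API. The endpoint path, port and field names here are from memory and differ between Chronos versions, and the image and schedule are placeholders, so treat this as an assumption to verify against the docs:
# submit a Docker-based job that repeats every 24 hours
curl -X POST http://chronos-host:4400/scheduler/iso8601 \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "nightly-report",
        "schedule": "R/2016-01-01T02:00:00Z/PT24H",
        "container": {"type": "DOCKER", "image": "mycompany/batch-report:latest"},
        "cpus": 0.5,
        "mem": 512,
        "command": ""
      }'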
BTW, did I mention already that with Mesos you don't risk a vendor lock-in, while being able to deploy it, depending on your needs, fully in one cloud (such as AWS), hybrid cloud (say AWS and GCP/Azure) or on-premises?
UPDATE: to clarify, of course Mesos has first-class Docker support.

Difference between containers (Docker) and IIS

I am learning about containers (mostly Docker) since they are coming to Windows, and the benefits seem very similar to IIS.
I work behind a firewall building apps for my company (Line of Business). We have a bunch of VMs that will each host a family of web services. One VM can have 20+ services running on IIS.
In that scenario what does deploying my services via Docker get me that I don't already get using IIS?
NOTE: I am completely new to Docker and only have developer level experience in IIS.
Docker is not a replacement for IIS - it can run an application like IIS within a container (I assume - not sure how this is going to work on Windows).
Docker is more like a replacement for a VM - the biggest difference between a VM and a Docker container is that the Docker container is MUCH lighter than a full VM. The usual claim that you see is that you can run many more Docker containers on a host than VMs (but your mileage may vary - some of the claims out there are a bit... overstated).
Basically, the idea is this: a VM is a full virtual machine - a real OS running on top of virtual hardware (that looks real enough to the OS). Thus, you're going to get all the bells & whistles of an OS, including stuff you probably don't need if you're running IIS or another HTTP server.
Docker, on the other hand, just uses the host's OS but employs some useful features of the OS to isolate the processes running in the container from the rest of the host. So you get the isolation of a VM (useful in case something fails or for security) without the overhead of a whole OS.
Now you could run "20+ services" in a single Docker container, but that's not generally recommended. Because Docker containers are so lightweight, you can (and should!) limit them to one service per container. This gives you benefits such as
separation of concerns: your database container is just that - a database. Nothing else. And furthermore, it only handles the data for the application that's using it.
improved security: if you want to set it up this way, your database container can only be accessed from the application that's using that database.
limited stuff installed: your database container should be running MySQL only - no SSH daemon, no web server, no other stuff. Simple & clean, with each container doing exactly one thing.
portability: I can configure my images, pull them to a new host, and start up the container and I will be guaranteed to have the exact same environment on the new host that I have on the old one. This is extremely useful for development.
That's not to say you couldn't set something similar up using VMs - you certainly could - but imagine the overhead of a full VM for each component in your application.
As an example, my major project these days is a web application running on Apache with a MySQL database, a redis server, and three microservices (each a simple independent web application running on Lighttpd). Imagine running six different servers for this application.
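A rough sketch of what that looks like as individual containers on a single host; the three microservice image names, the frontend image and the credentials are placeholders:
# one user-defined network, one container per component
docker network create appnet
docker run -d --name mysql --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name redis --network appnet redis:alpine
docker run -d --name svc-a --network appnet mycompany/svc-a
docker run -d --name svc-b --network appnet mycompany/svc-b
docker run -d --name svc-c --network appnet mycompany/svc-c
docker run -d --name web --network appnet -p 80:80 mycompany/apache-frontend
Each component can reach the others by container name on appnet, and each can be rebuilt, scaled or replaced independently.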
Docker containers add support for .NET, SQL Server, and other workloads that would integrate with IIS. You also benefit from Docker portability, as you could take your container images and run them on AWS or Azure, as well as privately. And you get access to a large ecosystem of Docker-based tools... bottom line, the industry is moving to support the Docker API.
To host a web application on IIS in a container, a good starting point is the latest IIS Docker image, or, if ASP.NET or WCF is the target platform, the relevant image for that platform, which in turn includes IIS:
https://hub.docker.com/r/microsoft/iis/
https://hub.docker.com/r/microsoft/aspnet/
https://hub.docker.com/r/microsoft/wcf/
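A minimal sketch for the plain IIS case, assuming the site content lives in a local content/ folder (folder and tag names are placeholders):
# Dockerfile (Windows containers): copy the site into IIS's default web root
FROM microsoft/iis
COPY content/ /inetpub/wwwroot
# build and run the image
docker build -t my-iis-site .
docker run -d -p 80:80 my-iis-site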

Docker-Swarm, Kubernetes, Mesos & Core-OS Fleet

I am relatively new to all of these, but I'm having trouble getting a clear picture of the listed technologies.
All of these try to solve different problems, but they do have things in common too. I would like to understand what is common and what is different. It is likely that a combination of a few of them would be a great fit; if so, which ones?
I am listing a few of them along with questions, but it would be great if someone could list all of them in detail and answer the questions.
Kubernetes vs Mesos:
This link, What's the difference between Apache's Mesos and Google's Kubernetes, provides good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more about the coming together of two open-source solutions?
Kubernetes vs Core-OS Fleet:
If I use kubernetes, is fleet required?
How does Docker-Swarm fit into all the above?
Disclosure: I'm a lead engineer on Kubernetes
I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications; they just have different histories and different approaches to solving the problem.
Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers.
In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running and managing distributed systems.
Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system; for example, CoreOS uses it to distribute the Kubernetes agents and binaries out to the machines in a cluster in order to turn up a Kubernetes cluster. It is not really intended to solve the same distributed application development problems; think of it more like systemd/init.d/upstart for your cluster. It's not required if you run Kubernetes; you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution.
Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
Join us in #google-containers on IRC if you want to talk more.
I think the simplest answer is that there is no simple answer. The swift rise to power of containers, and Docker in particular has left a power vacuum for "container scheduling and orchestration", whatever that might mean. In reality, that means you have a number of technologies that can work in harmony on some levels, but with certain aspects in competition. For example, Kubernetes can be used as a one stop shop for deploying and managing containers on a compute cluster (as Google originally designed it), but could also sit atop Fleet, making use of the resilience tier that Fleet provides on CoreOS.
As this Google video states, Kubernetes is not a complete out-of-the-box container scaling solution, but it is a good place to start from. In the same way, you would at some stage expect Apache Mesos to be able to work with Kubernetes, but not with Marathon, inasmuch as Marathon appears to fulfil the same role as Kubernetes. Somewhere I think I've read these could become part of the same effort, but I could be wrong about that - it's really about the strategic direction of Mesosphere and the corresponding adoption of Kubernetes principles.
In the DockerCon keynote, Solomon Hykes suggested Swarm would be a tier that could provide a common interface onto the many orchestration and scheduling frameworks. From what I can see, Swarm is designed to provide a smooth Docker deployment workflow, working with some existing container workflow frameworks such as Deis, but flexible enough to yield to "heavyweight" deployment and resource management such as Mesos.
Hope this helps - this could be an enormous post. I think the key is that these are young, evolving services that will likely merge and become interoperable, but we need to ride out the next 12 months to see how it plays out. There's some very clever people on the problem, so the future looks very bright.
As far as I understand it:
Mesos, Kubernetes and Fleet are all trying to solve a very similar problem. The idea is that you abstract away all your hardware from developers and the 'cluster management tool' sorts it all out for you. Then all you need to do is give a container to the cluster, give it some info (keep it running permanently, scale up if X happens etc) and the cluster manager will make it happen.
With Mesos, it does all the cluster management for you, but it doesn't include the scheduler. The scheduler is the bit that says: OK, this process needs 2 CPUs and 512 MB RAM, and I have a machine over there with that free, so I'll run it on that machine. There are some plugin schedulers available for Mesos, Marathon and Chronos, and you can write your own. This gives you a lot of power over resource distribution, cluster scaling, etc.
Fleet and Kubernetes seem to abstract away those sorts of details (so you don't have to write your own scheduler basically). This means you have to define your tasks and submit them in the format/manner defined by Fleet or Kubernetes and then they take over and schedule the tasks (containers) for you.
So I guess: Using Mesos may mean a bit more work in writing your own scheduler, but potentially provides more flexibility if required.
I think the idea of running Kubernetes on top of Mesos is that Kubernetes acts as the scheduler for Mesos. Personally I'm not sure what benefits this brings over running one or the other on its own though (hopefully someone will jump in and explain!)
As MikeB said.. it's early days, and it's all up for grabs (keep an eye on Amazon's ECS as well) so there are many competing standards and a lot of overlap!
-edit- I didn't mention Docker swarm as I don't really have much experience with it.
For anyone coming to this after 2017: fleet is deprecated. Do not use it any more.
Fleet docs say "fleet is no longer actively developed or maintained by CoreOS" and link to Container orchestration: Moving from fleet to Kubernetes. Fleet was removed from Container Linux (formerly known as CoreOS Linux) and replaced with Kubernetes kubelet (agent). This coincided with a corporate pivot to offer Tectonic (a Kubernetes distro) as their primary product.

Resources