Connecting logging events between docker containers - spring-boot

For the first time I am working in an environment where we deploy components in multiple Docker containers. The log files of each component are indexed by Splunk. In our example we have a legacy JSP app running in container A that invokes a RESTful API deployed in container B (Spring Boot). The issue I am running into is connecting the user request in container A to the RESTful API call in container B.
Basically I want a cohesive sequential view of all the code that was executed to process a user's request. I think I will have to log using some type of unique identifier that is attached to a user but shared across all containers. Is there some established pattern to do this?
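A widely used form of that pattern is a correlation ID: container A generates (or forwards) a unique ID per user request, sends it to container B as an HTTP header, and both sides write it into every log line so Splunk can join the events. Spring Cloud Sleuth automates this; as a rough hand-rolled sketch for the Spring Boot side (the header and field names below are illustrative, not from the question):

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.UUID;

// Reads the incoming X-Correlation-Id header (or creates one) and puts it into the
// SLF4J MDC, so a log pattern containing %X{correlationId} tags every line of this request.
@Component
public class CorrelationIdFilter implements Filter {

    private static final String HEADER = "X-Correlation-Id";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String id = ((HttpServletRequest) request).getHeader(HEADER);
        if (id == null || id.isEmpty()) {
            id = UUID.randomUUID().toString();
        }
        MDC.put("correlationId", id);
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId"); // don't leak the ID into the next request on this thread
        }
    }
}

The legacy JSP app would do the same when it first receives the user request and pass the header along on its outgoing REST calls; searching Splunk for a correlationId value then gives the sequential view across both containers.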

Related

Creating a Simple Hello World app in Kubernetes

Most software tech has a "Hello World" type example to get started on. With Kubernetes this seems to be lacking.
My scenario cannot be simpler. I have a simple hello world app made with Spring-Boot with one Rest controller that just returns: "Hello Hello!"
After I create my docker file, I build an image like this :
docker build -t helloworld:1.0 .
Then I run it in a container like this :
docker run -p 8080:8080 helloworld:1.0
If I open up a browser now, I can access my application here :
http://localhost:8080/hello/
and it returns :
"Hello Hello!"
Great! So far so good.
Next I tag it (my Docker Hub username is ollyw123, and the ID of my image is 776...):
docker tag 7769f3792278 ollyw123/helloworld:firsttry
and push :
docker push ollyw123/helloworld
If I log into Docker Hub I can see the pushed image there.
Now I want to connect this to Kubernetes. This is where I have plunged deep into a state of confusion.
My thinking is, I need to create a cluster. Somehow I need to connect this cluster to my image, and as I understand it, I just need to use the URL of the image to connect to (i.e.
https://hub.docker.com/repository/docker/ollyw123/helloworld)
Next I would have to create a service. This service would then be able to expose my "Hello World!" REST call through some port. This is my logical thinking, and to me this would seem like a very simple thing to do, but the tutorials and documentation on Kubernetes are a minefield of confusion and dead ends.
Following on from the spring-boot kubernetes tutorial (https://spring.io/guides/gs/spring-boot-kubernetes/) I have to create a deployment object, and then a service object, and then I have to "apply" it :
kubectl create deployment hello-world-dep --image=ollyw123/helloworld --dry-run -o=yaml > deployment.yaml
kubectl create service clusterip hello-world-dep --tcp=8080:8080 --dry-run -o=yaml >> deployment.yaml
kubectl apply -f deployment.yaml
OK. Now I see a service listed.
But now what???
How do I push this to the cloud? (eg. gcloud) Do I need to create a cluster first, or is this already a cluster?
What should my next step be?
There are a couple of concepts that we need to go through regarding your question.
The first would be about the "Hello World" app in Kubernetes. Even though such an example exists (as mentioned by Limido in the comments [link]), the app itself is not a Kubernetes app, but an app created in the language of your choice, which is then containerized and deployed in Kubernetes.
So I would call it (in your case) a Dockerized SpringBoot HelloWorld app.
Okay, now that we have a container we could simply deploy it by running Docker. But what if your container dies, or you need to scale it up and down, manage volumes, network traffic and a bunch of other things? This starts to become complicated (imagine a real-life scenario with hundreds or even thousands of containers running at the same time). That's exactly where container orchestration comes into play.
Kubernetes helps you manage this complexity in a single place.
The third concept that I'd like to talk about is the create and apply commands. You can find a more detailed explanation here, but both of them can be used to create resources in Kubernetes.
In your case, the create command is not creating the resources, because you are using --dry-run and adding the output to your deployment file, which you apply later on. But the following commands would also create your resources:
kubectl create deployment hello-world-dep --image=ollyw123/helloworld
kubectl create service clusterip hello-world-dep --tcp=8080:8080
Note that even though this works, if you need to share this deployment or commit it to a repository, you would need to get it first:
kubectl get deployment hello-world-dep -o yaml > your-file.yaml
So having the definition file is really helpful and recommended.
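For reference, the file produced by those two dry-run commands looks roughly like this (a trimmed sketch; field order and defaults vary by kubectl version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-dep
  template:
    metadata:
      labels:
        app: hello-world-dep
    spec:
      containers:
      - name: helloworld
        image: ollyw123/helloworld
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-dep
spec:
  type: ClusterIP
  selector:
    app: hello-world-dep
  ports:
  - port: 8080
    targetPort: 8080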
Great... Going further...
When you have a deployment you will also have a number of replicas that are expected to be running (even when you don't define it, the default value is 1). In your case your deployment is managing one pod.
If you run:
kubectl get pods -o wide
You will get your pod hello-world-dep-<hash> and an IP address. This IP is the IP of your container and you can access your application using it. But as pods are ephemeral, if your pod dies Kubernetes will automatically create a new one for you with a new IP address; so if, for instance, your backend's IP is constantly changing, you would need to update the frontend every time a new backend pod comes up.
To solve that, Kubernetes has the Service, which exposes the deployment in a persistent way. If your pod dies and a new one comes up, the address of your Service stays the same, and all the traffic is automatically routed to the new pod.
When you have more than one replica of your deployment, the Service also load-balances the traffic across all the available pods.
Last but not least, your question!
You have asked, now what?
So basically, once you have your application containerized, you can deploy it almost anywhere. In your case you are running it locally, but you could take your deployment.yaml file and deploy your application to GKE, AKS or EKS, to name the biggest ones; all the major cloud providers have some type of managed Kubernetes service where you can spin up a cluster and start playing around.
Actually, to play around I'd recommend Katacoda, as they have free scenarios and give you a cluster to experiment with.
Wow... That was a long answer...
Just to finish, I'd recommend the Network Introduction on Katacoda, as there are different types of Services depending on your scenario and what you need, and the tutorial goes through the different types in a hands-on way.
In the context of Kubernetes, a Cluster is the environment where your Pods and Services are running. Think of it like a VM environment where you set up your web server and so on (although I don't like my own analogy).
If you want to run the same thing on GCloud, you create a Kubernetes cluster there, and all you need to do is apply your YAML files containing the Service and Deployment via the CLI that Google Cloud provides for interacting with your cluster.
In order to interact with a GCloud GKE cluster from your local command prompt, you need to get the credentials for that cluster. This official GCloud document explains how to retrieve your cluster credentials. Once that is done, you can start interacting with the Kubernetes instance running in GCloud via the kubectl command from your command prompt.
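Fetching the credentials typically looks like this (the cluster name, zone and project are placeholders):
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
kubectl get nodes
After that, kubectl talks to the GKE cluster instead of your local one.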
The service that you have is of type ClusterIP, which is only accessible from within the Kubernetes cluster. You need to use a NodePort or LoadBalancer type service, or an Ingress, to expose the application outside of the remote Kubernetes cluster (a set of VMs or bare-metal servers in a public or private cloud environment with Kubernetes deployed on them) or your local minikube/Docker Desktop. Once you do that you should be able to access it using a browser or curl.
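As a sketch, one way to expose the existing deployment externally on a cloud cluster is to create an additional LoadBalancer service next to the ClusterIP one (the service name here is made up):
kubectl expose deployment hello-world-dep --name=hello-world-lb --type=LoadBalancer --port=8080 --target-port=8080
kubectl get service hello-world-lb
Once the EXTERNAL-IP column is populated, http://EXTERNAL-IP:8080/hello/ should return the greeting. On minikube, a NodePort service combined with the minikube service command achieves the same.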

Run several microservices docker image together on local dev with Minikube

I have around 20 microservices that I want to run and check in my local development environment. The microservices are Spring Boot services built with Maven. I wanted to know: when I have to run them on my AWS server, can I run all these containers individually? They might have a shared database, so will that be an issue I might face? Or is it possible to run all these services together in one single Docker image?
Also I have to configure this with Kubernetes, so I have set up Minikube in my local dev environment. It would be helpful to know if there are some considerations to be taken into account while running around 20 services on my Minikube or even a full Kubernetes environment.
PS: I know this is a basic question but I don't have much experience with DevOps.
Ideally you should have a different Docker image for each of the microservices and create a Kubernetes deployment for each of them. This keeps the scaling of individual microservices decoupled from each other. Also, communication between microservices should go via Kubernetes Services. This keeps communication stable, because Service IPs and FQDNs don't change even as pods are created, deleted, and scaled up and down.
Just be cautious about how much memory and CPU the microservices will need and whether the system running Minikube has that many resources. If the available memory and CPU of a Kubernetes node are not enough to schedule a pod, the pod will be stuck in the Pending state; a sketch of how to declare these requirements per service is shown below.
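As a rough sketch, you can declare each microservice's needs in its deployment spec so the scheduler can take them into account (the service name and numbers below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: myrepo/orders-service:1.0
        resources:
          requests:          # what the scheduler reserves for this pod
            memory: "512Mi"
            cpu: "250m"
          limits:            # CPU is throttled and memory is OOM-killed beyond this
            memory: "1Gi"
            cpu: "500m"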
As you have so many microservices, I suggest you create a Kubernetes cluster on AWS of 3-4 VMs (more info here). Then try to deploy all your microservices on it. For that you need to build the containers individually for each service and create a Kubernetes deployment for each service.
"I run all these containers individually like they might have shared database so will that be one issue I might face."
As you have a shared database, I suggest you run your database server on a separate host and then connect to it remotely from your services. This way you will be able to share the database between your microservices.
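If you go the Kubernetes route, one way to give every microservice a stable in-cluster name for that external database is an ExternalName Service (the names below are made up, a sketch only):

apiVersion: v1
kind: Service
metadata:
  name: shared-mysql               # what the microservices put in their JDBC URL
spec:
  type: ExternalName
  externalName: db.internal.example.com   # the host actually running the database

The services then connect to shared-mysql:3306, and the real database host can be changed in one place.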

How to configure Application Logging Service for SCP application

I have created the hello world application from the SAP Cloud SDK archetypes and pushed it to the Cloud Foundry environment, binding it to an application logging service instance. My understanding is that this should already provide me with the ability to analyze all logs in the Kibana dashboard of the cloud platform, and previously it also worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.
The problem was not SDK-related, but seems to have been an incident on the SCP; it now works correctly without any changes.

laravel queue service in docker container

I've got 3 Docker containers, php7, nginx and mariadb; they are linked up and serve simple WordPress sites.
I'd like to add a Laravel project to the bunch. It all works great except for the Laravel services that I need to run, e.g. the queue listener and the scheduler cron. How do you recommend dealing with these?
You might want to consider using Docker Compose to orchestrate multiple containers together. For example, you'd have a Docker Compose file that declares a Docker network and three containers:
Message Queue
Cron Scheduled Tasks
Laravel application + PHP + Web Server
As long as you add each of the containers to the same network, they'll be able to communicate with each other. Another benefit of using Docker Compose is that scaling containers is much easier.
Here's the reference documentation for Docker Compose YAML files: https://docs.docker.com/compose/compose-file/
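A minimal sketch of such a compose file might look like this (the image names, the choice of Redis as the queue backend, and the scheduler loop are illustrative assumptions, not from the question):

version: "3.8"
services:
  app:                        # Laravel + PHP-FPM, served by your existing nginx container
    image: my-laravel-app:latest
    env_file: .env
    networks: [laravel]

  redis:                      # message queue backend
    image: redis:6-alpine
    networks: [laravel]

  queue:                      # same application image, but runs the queue listener
    image: my-laravel-app:latest
    command: php artisan queue:work redis --tries=3
    env_file: .env
    depends_on: [redis]
    networks: [laravel]

  scheduler:                  # pokes the Laravel scheduler once a minute (replaces the host cron)
    image: my-laravel-app:latest
    command: sh -c "while true; do php artisan schedule:run; sleep 60; done"
    env_file: .env
    networks: [laravel]

networks:
  laravel: {}

docker-compose up -d then starts all of these containers on the shared network.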

How to cluster a Grails 2.3.6 app's session with embedded Tomcat?

I'm deploying my Grails (2.3.6) app with the Grails Standalone App Runner plugin, like so:
grails -Dgrails.env=prod build-standalone myapp.jar --tomcat
Then, my CI build places myapp.jar onto my app server, say, myapp01.
I now want to cluster app sessions when myapp is running on multiple nodes. So if myapp gets deployed to myapp01, myapp02 and myapp03, and one of those instances starts a new session with a user, I want all 3 to be aware of the same session. This is obviously so I can put all the nodes behind a load balanced URL (http://myapp.example.com, etc.) and it doesn't matter what node you get routed to: all nodes share the same sessions.
I googled "grails session clustering" and see a bunch of articles that seem to require terracotta, but I also heard that Grails has built-in session clustering facilities. But any searches I do come back empty-handed.
So I ask: How can I achieve this kind of session clustering with an embedded Tomcat?
Besides the session-cookie plugin that #injecteer proposed, there are several other plugins that allow you to keep sessions in shared storage (DB, MongoDB, Redis, memcached) that can be accessed by any of your Tomcat instances. Take a look at these:
http://grails.org/plugin/database-session
http://grails.org/plugin/mongodb-session
http://grails.org/plugin/redis-database-session
http://grails.org/plugin/standalone-tomcat-redis
http://grails.org/plugin/standalone-tomcat-memcached
I never heard of something like this working out of the box. I would give 2 options a try:
Use a session-cookie plugin, which decouples session storage from Tomcat by keeping the session on the client.
Use or implement persistent sessions, which are stored in some sort of DB and are not bound to any Tomcat instance.
You could achieve this by using Tomcat's built-in functionality. A Tomcat instance can replicate sessions from the other nodes, so all sessions are shared between the nodes.
You can do this in at least three ways:
Session replication using multicast between instance nodes.
Session replication just between a primary node and a secondary backup node.
Session replication between static memberships; this is useful when multicast cannot be enabled or is not supported, such as in an AWS EC2 environment.
Reference:
http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
http://khaidoan.wikidot.com/tomcat-cluster-session-replication
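For reference, on a standalone Tomcat the default all-to-all session replication is switched on with a single element inside <Engine> or <Host> in conf/server.xml (plus a <distributable/> marker in the app's web.xml); with the embedded Tomcat used by the standalone plugin you would need to apply the equivalent cluster configuration programmatically:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>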
