Multiple Docker containers on AWS Elastic Beanstalk: how to scale each container? - laravel

I plan to use the Elastic Beanstalk Docker runtime.
We plan to build a multi-container deployment that launches a Laravel application container and a Laravel queue-worker container.
Deploying them is no problem.
However, I'm unsure how they scale.
For example, if only the load on the Laravel application container increases, does only that container scale out, while the queue-worker container does not?
Or do the two containers always scale together?
I'd prefer the former.
If anyone knows, please share.
Thanks for reading.
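As far as I know, on the multi-container Docker platform every container defined in Dockerrun.aws.json runs on every EC2 instance in the environment, and Auto Scaling adds or removes whole instances, so both containers scale together. To scale them independently, you would run each in its own environment. A minimal sketch of the combined setup (image names and memory values are placeholders):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "laravel-app",
      "image": "myorg/laravel-app:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }]
    },
    {
      "name": "queue-worker",
      "image": "myorg/laravel-app:latest",
      "command": ["php", "artisan", "queue:work"],
      "essential": true,
      "memory": 256
    }
  ]
}
```

Because this whole file is deployed to each instance, a scaling event replicates both containers; splitting the worker into its own single-container environment (or moving to ECS with separate services) lets each part scale on its own metrics.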

Related

Add an EC2 instance as a worker to Amazon MWAA

I am currently using Amazon MWAA as my Airflow. I want two types of worker nodes, but MWAA currently doesn't support that. I want to have:
High Compute Optimized CPU workers
GPU workers
I want to create separate queues for the two worker types and submit jobs to those worker nodes.
Is it possible to add an existing EC2 instance (say GPU instance) to MWAA? I only see Start and Stop EC2 operators available.
Does anyone have any pointers on this?
If you have an EKS cluster, it's possible to define a GPU node pool and, using the KubernetesPodOperator, run a Docker container on that pool.
Another option is ECS (easier to set up); you can see a good example of running GPU workloads in Airflow in this article.
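To illustrate the EKS route: KubernetesPodOperator ultimately launches a pod, so the key is pinning that pod to the GPU node pool and requesting a GPU device. Roughly, the pod it creates would look like this (the pool label and image name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task
spec:
  nodeSelector:
    pool: gpu-pool              # label on the GPU node group (assumed name)
  containers:
  - name: task
    image: myorg/gpu-job:latest # placeholder image
    resources:
      limits:
        nvidia.com/gpu: "1"     # schedules onto a GPU node and grants one device
```

In the operator, the same effect comes from its node-selector and container-resources parameters.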

Intermittently slow network inside Docker container in Service Fabric

I'm running a few Windows IIS containers on a Service Fabric cluster. Occasionally, especially after high load, outbound connections from inside the containers become very slow and cause timeouts. Even after restarting the containers, the issue is not fixed. I even tried running a container explicitly with docker run on the node, as opposed to using an SF deployment, and the new container had the same slow network. What resolves it is restarting the Fabric.exe process on the node. The issue is random in that it affects only one node at a time.
Any ideas what could be causing this?

IIS in Docker on Kubernetes: in-flight requests

Currently, I'm running an application using VMs behind a load balancer. When I need to roll out a new version, a new set of VMs are spun up and the version gets deployed to them. The old VMs are given time to complete in-flight requests - 10 minutes in my case.
I'd like to move the whole project to Docker and Kubernetes using Microsoft's standard images for aspnet but I cannot find how in-flight requests are handled with this stack.
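In Kubernetes, the usual knobs for draining in-flight requests are the pod's termination grace period plus a preStop hook: during a rolling update the pod is removed from the Service's endpoints and sent a termination signal, and anything still in flight has until the grace period expires. A sketch matching the 10-minute window above (the image name is a placeholder):

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 600   # allow up to 10 minutes to finish in-flight requests
      containers:
      - name: aspnet-app
        image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
        lifecycle:
          preStop:
            exec:
              # brief pause so endpoint removal propagates before shutdown begins
              command: ["powershell", "-Command", "Start-Sleep -Seconds 15"]
```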

Docker Container Dispatcher

I want to instantiate different containers from the same image to serve different requests and process different data.
Once a request is received by Docker, it has to instantiate a container (C1) from image (I), to work on a related dataset file D1.
For the second request, Docker has to instantiate a container (C2) from image (I) as well, to work on dataset file D2.
And so on ...
Does Docker have a built-in facility to orchestrate this kind of workflow, or do I have to write my own service that receives requests and starts the corresponding containers to serve them?
Any guidance on the best way to do this would be appreciated.
I think what you're looking for is a serverless / function-as-a-service framework. AFAIK Docker doesn't have anything like this built in. Take a look at OpenFaaS or Kubeless; both are frameworks that should help you get started with implementing your case.
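Short of adopting one of those frameworks, a thin service of your own can map each incoming request to a `docker run` of the shared image, mounting that request's dataset. A minimal sketch (the image name and mount path are assumptions):

```python
import subprocess

IMAGE = "my-processing-image"  # hypothetical image (I) shared by all jobs

def build_run_command(dataset_path, container_name):
    """Build the `docker run` invocation for one request's dataset."""
    return [
        "docker", "run", "--rm",
        "--name", container_name,
        "-v", f"{dataset_path}:/data/input:ro",  # mount this request's dataset (D1, D2, ...)
        IMAGE,
    ]

def dispatch(request_id, dataset_path, dry_run=False):
    """Start one container per request; dry_run returns the command instead of running it."""
    cmd = build_run_command(dataset_path, f"job-{request_id}")
    if not dry_run:
        subprocess.run(cmd, check=True)  # blocks until the container exits
    return cmd
```

Each call spins up an independent container (C1, C2, ...) from the same image, which is exactly the per-request isolation described above.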

monitoring memory usage in docker

I am running 1000 docker containers sequentially. Each container instance runs a specific job. After the execution of the job, I kill the container to release resources and run another job within another instance and so on.
I would like to get the memory usage of each container. One value per container expressing the average memory usage.
How can I do this?
Maybe with Prometheus, but I don't know how to use it.
Prometheus and cAdvisor are one possibility. I gave a talk earlier in the week with an example of doing this using Swarm. See http://www.slideshare.net/brianbrazil/prometheus-and-docker-docker-galway-november-2015
The metrics you're interested in will then be available under the cadvisor job label.
As you state, you are running multiple containers and want to scrape each container's memory usage. This can be done with a Prometheus query.
The container_memory_usage_bytes metric gives you real-time information about how much memory each container is using.
With PromQL (for example, avg_over_time over that metric) you can reduce it to one average value per container.
Hope this answers your question.
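If Prometheus feels heavy for sequential one-off containers, you can also sample `docker stats` from the host while each job runs and average the readings yourself. A sketch, assuming the default MemUsage format (e.g. `12.5MiB / 1.952GiB`):

```python
import subprocess

UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}

def parse_mem(usage):
    """Parse the left side of docker stats' MemUsage column, e.g. '12.5MiB / 1.952GiB' -> bytes."""
    value = usage.split("/")[0].strip()
    # check longer unit suffixes first so 'MiB' is matched before 'B'
    for unit, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if value.endswith(unit):
            return float(value[: -len(unit)]) * factor
    raise ValueError(f"unrecognized memory value: {value!r}")

def sample_container_memory(name):
    """One memory sample in bytes for a running container (requires the docker CLI)."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_mem(out)

def average(samples):
    """One value per container: the mean of its sampled readings."""
    return sum(samples) / len(samples)
```

Poll sample_container_memory in a loop while the job runs, then call average on the collected samples before you kill the container.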
