docker-compose with similar images - performance

I currently have Docker running on a PI3+ with the following images, each in its own container:
lsioarmhf/sonarr
lsioarmhf/radarr
lsioarmhf/jacket
As these three images share a lot of common libraries (e.g. mono), I am wondering if there is a way to reduce their combined memory and CPU footprint.
In order to do this I was looking at two possibilities:
1) building and maintaining my own image (based on the lsioarmhf ones on GitHub) that combines all three applications
2) using docker-compose
Can anyone please tell me whether docker-compose would reduce the memory footprint of the elements these images have in common?
Or would it be the same as running three separate containers?
Thanks,

No, docker-compose orchestrates your containers; it doesn't combine their runtime resources in any way. For simple setups it's virtually the same as starting all 3 manually.
There is no way to do that with Docker at all, actually. The images might share disk space (layers common to several images are stored only once), but the runtime footprint has to be separate, because they're different instances.
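For illustration, a minimal docker-compose.yml equivalent to starting the three containers manually might look like this (a sketch using the image names from the question; nothing in it makes the containers share memory):

    version: "3"
    services:
      sonarr:
        image: lsioarmhf/sonarr
        restart: unless-stopped
      radarr:
        image: lsioarmhf/radarr
        restart: unless-stopped
      jackett:
        image: lsioarmhf/jacket   # image name as written in the question
        restart: unless-stopped

Running docker-compose up -d with this file behaves like three separate docker run commands: mono is still loaded once per container.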
Since it looks like you're using a PI3+ as a dedicated board for this project, you might be better off not using Docker at all. If you need the Pi for another project, another microSD card is inexpensive enough to start from scratch, if isolation is what you're worried about.

Related

docker-compose dynamically see if container was started (or: docker-compose plan?)

Say I run a docker-compose script that starts n containers.
I want to dynamically see whether the command started (or restarted) a specific container, as opposed to it already being up.
Is there a way to do this? Or, alternatively, is there a way to ask docker-compose what it's going to do before doing it? (like terraform plan?)
The closest idea I had was to run docker-compose ps right after docker-compose up and check the container's uptime, but that's a bit hacky.
Another hacky approach would be to parse the logs, which wouldn't be so bad except that I didn't find a clean way to do it.
Thanks
If I were in your shoes, I'd ask myself why this is necessary, and why it needs to be solved at the Docker level.
Neither docker nor docker-compose is really release-management software; they are more like platform/infrastructure software. Arguably docker-compose could be considered deployment software.
On to your problem: I can't think of any framework or library that sits on top of docker/docker-compose and helps with this, nor can I find a good solution for you.
The "hacky" ways you suggest might be the best way of doing this (preferably the first). However, I'd still ask myself whether this is really necessary. If so, it might be worth moving on to Kubernetes and using something like Helm, which gives you some measure of what you are looking for.

What is the best Azure VM for pdf generation?

See this link: https://learn.microsoft.com/en-us/azure/architecture/example-scenario/infrastructure/video-rendering
It lists some VMs we can consider for generating PDF files.
My application's main purpose is to generate PDFs as fast as possible. However, I do not know how PDF generation works internally or which resource the process is most constrained by (CPU, GPU, memory, disk...).
Could you tell me which kind of VM I should choose?
Thank you.
For your purpose, the task counts as a compute-intensive, graphics-intensive visualization workload, so you should look at the GPU-optimized VM sizes. There are six VM series to choose from, and the main difference between them is the NVIDIA card they are based on.
"Users are able to visualize their graphics intensive workflows on the NV instances to get superior graphics capability and additionally run single precision workloads such as encoding and rendering."
So I think the NV-series virtual machines are most appropriate for you.
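If you want to try one, a minimal Azure CLI sketch for creating an NV-series VM (resource group, VM name, and image alias are placeholders to adapt):

    # All names are placeholders; Standard_NV6 is one of the NV-series sizes.
    az group create --name pdf-rg --location eastus
    az vm create \
      --resource-group pdf-rg \
      --name pdf-worker \
      --image Ubuntu2204 \
      --size Standard_NV6 \
      --admin-username azureuser \
      --generate-ssh-keys

That said, it's worth benchmarking your actual PDF generator first: if it turns out to be CPU-bound rather than GPU-bound, a compute-optimized series may serve you better.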

Is Go efficient in Cloud environment like Kubernetes?

Go has mechanical sympathy. Does that mean I need to modify my code based on the hardware it runs on to get the best possible performance? How does that work in a cloud environment like K8s, where a developer doesn't care about the hardware?
Go compiles for all relevant architectures; you do not have to modify your code for different platforms.
In cloud environments (like Kubernetes, for example) you usually ship a Docker image or drop in your binary.
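For example, the same source cross-compiles for different targets without any code changes (output names are placeholders):

    # Build the same Go program for two different Linux architectures.
    GOOS=linux GOARCH=amd64 go build -o app-amd64 .
    GOOS=linux GOARCH=arm64 go build -o app-arm64 .

The compiler and runtime handle the architecture-specific details; performance work in Go is about your algorithms and allocation behavior, not per-hardware code paths.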

Single Docker Image with multiple Softwares or Separate images for separate software

Need your expert advice before building my Docker image. I have a requirement to install multiple programming languages in a Docker image. I have two options to proceed:
(a) Install all the software together and build a single image, which may be around 4GB.
(b) Install each piece of software separately and build a separate image for each, with each image around 1GB.
Now the question is: if I want to use these images on a single machine to create multiple containers that run in parallel, which option is the better one, a single bigger image or multiple smaller images?
Thanks in advance for your kind suggestions.
Regards
Mohtashim
According to Docker best practices, you should put one service per image. This gives you more fine-grained control over each service. See here: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
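As a sketch of the one-service-per-image approach (base image and package are assumptions, not from the question), each language gets its own small Dockerfile on a shared base:

    # Dockerfile.python -- one runtime per image
    FROM debian:bookworm-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends python3 \
        && rm -rf /var/lib/apt/lists/*
    CMD ["python3"]

Note that images built from the same base share its layers on disk, so several 1GB images with a common base cost far less than their summed sizes in practice, and each container only pays the memory cost of the software it actually runs.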

Most effective Vagrant workflow, structure?

So I just started to work with Vagrant and I'm wondering what the most effective structure or workflow is. Should I create a separate VM for every project, or should I use one VM for many projects? For example, I have 5 WP projects, very different, from a landing page to WooCommerce; should I separate them or put them all in one VM? I think putting many projects in one VM defeats the purpose of Vagrant; on the other hand, putting every project in a separate VM feels like overkill. Or is this normal practice?
Here is a visual example of what I'm talking about (two diagrams): one VM per project, versus one VM with many projects/vhosts.
So which one is better? Or does it depend on the situation, with no correct answer?
"Or does it depend on the situation, with no correct answer?"
I think that's your correct answer!
The best approach is always to isolate projects from each other, especially if they have no dependencies on each other. If you need different PHP versions etc., it's best to isolate them into different VMs.
On the other hand: do you need to work on all 5 projects at the same time? Starting 5 VMs on your host is overkill, and you'll quickly run into performance issues (unless you have 64+ GB of RAM).
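A middle ground is one multi-machine Vagrantfile, so each project gets its own isolated VM but you only boot the ones you're working on. A sketch (box, machine names, and IPs are assumptions):

    # `vagrant up landing` boots only that machine; `vagrant up` boots all of them.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/focal64"

      config.vm.define "landing" do |m|
        m.vm.network "private_network", ip: "192.168.56.10"
      end

      config.vm.define "shop" do |m|
        m.vm.network "private_network", ip: "192.168.56.11"
        m.vm.provider "virtualbox" do |vb|
          vb.memory = 2048   # a WooCommerce site needs more RAM than a landing page
        end
      end
    end

This keeps per-project isolation without the cost of running all five VMs at once.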
