Docker Ubuntu image with web server installed explicitly vs Docker web server image?

What is the difference between a Docker OS image with a web server installed on top and a dedicated Docker web server image?
For example, a container based on the Ubuntu 16.04 Docker image with Nginx installed inside it, versus a container running the official Nginx Docker image?
Which will perform better and be more stable?

Usually a container with Nginx runs on Alpine, a very lightweight OS, while in the other case you have Ubuntu plus Nginx.
So, the difference? ... the OS.
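To make that concrete, here is a hedged sketch comparing the two approaches: a Dockerfile that installs Nginx on top of Ubuntu 16.04, and the one-line equivalent based on the official Alpine image (tags and package names are illustrative):

# Sketch A: install Nginx yourself on an Ubuntu base (image weighs hundreds of MB)
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

# Sketch B: reuse the official Alpine-based image (tens of MB)
FROM nginx:alpine

Either way the container shares the host kernel, so raw request-handling performance is essentially the same; the Alpine-based image is mainly smaller to pull, store, and patch.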

If you have good Docker/Unix/shell-scripting skills, a continuous-integration (CI) system, and the willingness to do ongoing maintenance, you might prefer building your own images. You will be in control of the exact version of the software used, and any build options or extensions required, and you will control when it gets security patches. But, this is a harder path to get started with, and if you don't periodically update your custom images they'll never get any sort of bug fixes or security patches at all.
If you're new to this space, you might prefer standard Docker Hub images. They're pre-packaged, usually have "enough" customization options, and are generally fairly good quality. But, if you need some extra customization, you might wind up needing to build a custom image anyway. I've also run into a situation where I pinned an image to a specific upstream version, image:1.2.3, and noticed several months later that image:1.2.7 was out and the six-month-old Docker Hub image hadn't gotten a critical security fix because it wasn't being built any more.
If none of this especially concerns you (and if you don't have a DevOps team at your disposal), I'd suggest just using the prebuilt nginx image and focusing on building and deploying your actual application.
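As a hedged illustration of that last suggestion, a minimal Dockerfile that starts from the prebuilt Nginx image, pins an explicit upstream tag for reproducibility, and layers in your own configuration (the tag and file paths are placeholders):

# Pin an explicit upstream tag so rebuilds are reproducible;
# you must bump this tag yourself to pick up security fixes.
FROM nginx:1.17-alpine
# Overlay your site configuration and static content (hypothetical paths).
COPY nginx.conf /etc/nginx/nginx.conf
COPY site/ /usr/share/nginx/html/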

Related

Converting Online Express App to an Offline App for distribution

I have been trying to convert my Express app, whose backend requires specific dependencies that users need to access, into an offline app on Windows. It is currently distributed online by building a Docker image of it and deploying that to Google Cloud Run.
My plan was to use Electron and let the installer force users to install Docker Desktop, since the application requires dependencies that are only available on Ubuntu (this way, I only need my Docker image to distribute it).
I know that it is a bad idea to make users install Docker. And I am here, after researching for a week, to sincerely ask for your help on the right direction for converting it to an offline app.
Do I really need to refactor my application to be Windows compatible, or is there a way I can use my existing implementation without using Docker?
Edit: Specified that my app has a backend that requires specific dependencies that users need to access. Credits to #super.

Bundle Docker image as an executable application for major platforms / Can we run docker images without docker?

I would like to build an application as an executable that anyone could start without any prerequisites and without a GUI.
Do you know if we can bundle a Docker image as an executable app for macOS or Windows (and Android/iOS)?
To phrase it another way: can we run Docker images without Docker installed? Can we bundle a Docker image and Docker itself inside an app so that, when executed, it starts a container using the embedded engine?
Docker is just a set of Linux features (Windows containers use similar Windows features), so as long as you pack whatever you need and set everything up just like Docker (or any other container runtime, like Podman) does for you, it will probably work. Just note that if you start from a Docker image, you'd need to unpack its files and do everything the Docker engine and CLI normally do for you.
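As a rough, hedged sketch of what "unpack its files" means in practice on a Linux host (this recreates none of the networking, cgroup, or namespace isolation Docker normally sets up; image name illustrative):

# Export the image's filesystem from a throwaway container, then chroot into it.
docker create --name tmp nginx:alpine
docker export tmp > rootfs.tar
mkdir rootfs && tar -xf rootfs.tar -C rootfs
sudo chroot rootfs /bin/sh    # run the image contents without the Docker engine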
It depends on the application type, but you could use a PWA, which runs like a native desktop application on computers and smartphones and offers similar functionality, while using general-purpose web frameworks like React, Angular or Vue.
If you want to run an executable regardless of the operating system or architecture, Docker is your best bet; and if being lightweight and daemonless is your thing, consider using Podman (or the like) as your application's dependency and running your application with it.
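For example, with Podman installed as the only dependency, starting a bundled image is a single daemonless command (image name illustrative):

podman run --rm -p 8080:80 docker.io/library/nginx:alpine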

Run Linux containers in an Azure DevOps Windows hosted build agent

I'm using Azure DevOps and have a Linux build pipeline (ubuntu-16.04) set up that builds the code, starts containers with Docker Compose, then runs integration tests against the containers. This all works great.
Now I want to set up a Windows build pipeline that does the same thing. However, with both the windows-2019 and win-1803 images, when I do docker stack up, I get this error message:
image operating system "linux" cannot be used on this platform
So, I guess Docker is installed in Windows mode, and I thought to switch it to Linux containers using:
DockerCli.exe -SwitchLinuxEngine
or
"%ProgramFiles%\Docker\Docker\DockerCli.exe" -SwitchLinuxEngine
However, the DockerCli.exe executable doesn't seem to be installed at all.
The only two things I can think of are:
Set up a self-hosted build agent
Somehow start the required containers somewhere else
But both of these would be a lot of work to set up, which I really don't want; nor do I want the running costs or the job of maintaining it.
Are there any workarounds to run Linux containers on hosted Windows build agents?
First, see the list of software installed on the Windows hosted agents: Docker images in Windows hosted agent. Docker EE on Windows Server does not support Linux containers at all, so it is impossible to build a Linux Docker image on the hosted win-1803 agent; it can only build Windows Docker images.
For now, the only two workarounds are to use a self-hosted agent based on a Windows machine, or to run a build with two separate agent jobs, passing the build artifacts back and forth between one agent job that runs on a hosted Linux agent and another that runs on a hosted Windows agent.
Since neither of these workarounds is convenient for you, there is no other workaround that can achieve what you want.
In addition, there is a feature suggestion on our official forum: Support for Docker with Linux Containers on Windows (LCOW) on hosted agent pool. You could vote and comment there; our product group reviews these suggestions regularly and considers them for the developer roadmap. If this feature ships, it will be very convenient to build Linux containers without having to think about which agent supports what.
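A minimal sketch of the two-agent-job workaround mentioned above, assuming the Linux container work can be isolated into its own job (the vmImage names come from the question; script contents and artifact names are placeholders):

jobs:
- job: linux_tests
  pool:
    vmImage: ubuntu-16.04
  steps:
  - script: docker-compose up -d && ./run-integration-tests.sh
  - publish: $(Build.ArtifactStagingDirectory)
    artifact: results
- job: windows_build
  dependsOn: linux_tests
  pool:
    vmImage: windows-2019
  steps:
  - download: current
    artifact: results
  - script: echo remaining Windows-side steps here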

What images should I be using for Windows on Windows with Docker EE on Windows 2016

I have been developing and maintaining some Windows-on-Windows Docker containers that run ASP.NET Core applications on Windows Server 2016 (using Docker EE) for some time now. I was planning on turning all ongoing updates/maintenance over to the server administrators, but I have hit a problem. When I started, I believe I was using SAC builds, but now none of the SAC (or LTS, for that matter) builds pull on Windows 2016, and though I have spent a good deal of time googling, this whole thing seems to be a big cluster. With Docker on Linux, I would just use any LTS distro and apply updates when building the container. Does Microsoft have a clear plan for doing the same? It seems like they are missing the point of Docker. I want to run a Windows-on-Windows container on Windows Server 2016, and I want to make sure that when I recreate it I am getting the latest security updates.
https://devblogs.microsoft.com/dotnet/net-core-container-images-now-published-to-microsoft-container-registry/
This page talks about the big changes made in Docker images recently and specifically says the following:
.NET Core images for Nano Server 2016 are still available on Docker Hub and MCR and will not be deleted. You can continue to use them but they are not supported and will not get new updates. If you need to do this and previously used manifest tags, like 1.1-sdk, you can now use the following MCR tags (Docker Hub variants are similar)
Does this mean the new tags listed get updates? I would assume they would tag it with LTS instead of SAC2016 to better convey the notion that they are continuing to update.
This page seems to be really helpful, but none of the images listed pull on Windows Server 2016:
https://andrewlock.net/exploring-the-net-core-mcr-docker-files-runtime-vs-aspnet-vs-sdk/
This is what I get when I attempt to pull any of the images:
1709: Pulling from windows/nanoserver
no matching manifest for unknown in the manifest list entries
To clarify, I can currently run all my applications using such images as these:
mcr.microsoft.com/dotnet/core/runtime 2.2-nanoserver-sac2016 4a3bbafea836 3 months ago 1.27GB
mcr.microsoft.com/dotnet/core/sdk 2.2-nanoserver-sac2016 9773d80bdd64 3 months ago 2.62GB
I am looking for clarity on support of these images, or a clearer direction to migrate.
Right now, for LTS, the image you want to pull is mcr.microsoft.com/dotnet/core/aspnet:2.1, since 2.1 is the LTS release of ASP.NET Core. The underlying server reference honestly doesn't matter: all the .NET Core images are multi-arch, so the right underlying image is pulled automatically (Linux for a Linux host, Windows for a Windows host, and AMD64, x86, ARM, etc.).
The OS of the image (aside from being the right architecture and platform) is really kind of meaningless. It's mostly a translation layer. Images aren't VMs, the OS is on the host, and that's where your security patches and such apply. As long as your host is patched up, you're good.
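You can see the multi-arch behavior for yourself: inspecting the tag's manifest list shows one entry per OS/architecture pair, and a plain pull resolves to the entry matching your host (docker manifest requires a reasonably recent Docker CLI):

docker manifest inspect mcr.microsoft.com/dotnet/core/aspnet:2.1
docker pull mcr.microsoft.com/dotnet/core/aspnet:2.1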
UPDATE
This has apparently led to some pedantic arguments in the comments, so let me be a little more clear. What I'm talking about here is best described by the containers-vs-VMs graphic on the Docker site.
Whereas a VM has a copy of the OS on each instance, containers utilize a shared host OS. The OS base image is basically a proxy: it provides the API, but everything at the OS level happens on the host OS, not in the container.
As such, yes, the OS base image matters to a certain extent. You can't target a Linux base image and deploy to Windows Server. You'd have issues targeting Windows Server 2019 and deploying to 2016, as well. However, assuming that the OS base image is remotely compatible with the host OS, then everything above and beyond that is meaningless.
Specifically to the discussion of patches and LTS versions, you don't need to care, because again, what's actually running is components of the host OS, not anything from the image itself. You can actually see this if you open Task Manager on the host OS. You'll see duplicate system-level processes tied to each running container. Even though the container shows running processes as well, it is these host-level processes that are actually doing the work, and therefore, it is only important that they are patched and supported. If everything is good on your host, you need not worry about the containers, at least for the OS part of things.
https://github.com/docker/for-win/issues/3761
I was working on this around March 12, when all the Docker pulls stopped working because of the changes Microsoft made. So I am sure I saw this page before, but on rereading the entire thing, I see this comment:
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
That seems like a reasonable tag name for long-term support. Lo and behold, it works. I am currently theorizing that Nano Server is only for the latest and greatest, and am thinking of opening an issue on GitHub to see if someone will answer that definitively.
I think one of the comments on that page from the GitHub maintainer settles the debate in Chris Pratt's answer. I think misinformation floating around about security is dangerous, so I am reposting it here to help future souls who stumble on this question:
Yes, when running with process-isolation, the version must match the Windows kernel version you're running on. Unlike Linux, the Windows kernel does not have a stable API, so container images running on Windows must have libraries that match the kernel on which they will be running to make it work (which is also why those images are a lot bigger than Linux images).
Vulnerable libraries in a Docker container DO matter. You cannot rely on the host OS being up to date to protect you.
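One practical consequence: on Windows you can pick the isolation mode per container. Process isolation requires the image version to match the host kernel, while Hyper-V isolation relaxes that requirement by running each container in a lightweight VM (a hedged sketch; the tag must still exist for your host):

docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver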
Further Research
Still researching this, so adding my updates for your benefit as I find them:
Article about Migrating
https://www.altaro.com/hyper-v/nano-server-no-longer-supported-for-infrastructure/
TLDR - Move to servercore
Server 2016 14393 Tags on Docker Hub
The main Docker Hub Nano Server page does not list any 14393 tags, but the full tags list at the bottom of the page shows many. I was able to pull mcr.microsoft.com/windows/nanoserver:10.0.14393.1066, and it is only 1 GB instead of 14 GB for Server Core.

Configure hosted macOS image

Is there any way to manually configure a hosted macOS agent in Azure DevOps? I mean, download the agent, install the dependencies, applications, frameworks and utilities needed to build my application, and then upload this image to use in my agent pool. As far as I can see, only predefined agents with predefined software are available.
No, this is not possible. Software on Microsoft-hosted agents is updated once each month.
For the detailed software and frameworks on the hosted macOS image, you can refer to this link.
Microsoft-hosted agents do not offer:
The ability to log on.
The ability to drop artifacts to a UNC file share.
The ability to run XAML builds.
Potential performance advantages that you might get by using self-hosted agents, which might start and run builds faster.
If Microsoft-hosted agents don't meet your needs, you should deploy your own self-hosted macOS agents.
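If you do go the self-hosted route, registering a macOS agent is roughly the following (a hedged sketch; the agent version in the URL and the URL/PAT/pool values you enter at the prompts are placeholders you'd take from your own Azure DevOps organization):

# Download and unpack the agent, then configure it against your organization.
mkdir myagent && cd myagent
curl -O https://vstsagentpackage.azureedge.net/agent/2.153.1/vsts-agent-osx-x64-2.153.1.tar.gz
tar -xzf vsts-agent-osx-x64-2.153.1.tar.gz
./config.sh   # prompts for server URL, PAT token, and agent pool
./run.sh      # start the agent interactively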
