I am starting to understand Docker, and as far as I am aware a Docker container runs on the Linux distribution of the host where it is installed - in my case that's the lightweight Linux distribution that comes with Docker Toolbox on Mac OS X.
So why do I see so many Dockerfiles actually installing a distribution inside the container? Does this not defeat the object of keeping things light?
For example, here is one Docker file starting with:
FROM debian:jessie
so this is installing a Debian-based image inside the container.
I also see many others using Ubuntu, for example.
Can this step not be bypassed, with the software installed directly in the container using the underlying Linux distribution where the container is installed?
Because, just as for physical or virtual machines, setting up a userland environment is going to be a pain without a distribution.
This is, IMO, one of the strong benefits of Docker: pick the most suitable distribution for each particular application.
A containerized application is probably going to have dependencies, and to install these dependencies it helps a lot to have a package manager. Some dependencies also come preinstalled in many distributions, which makes it a good idea for the creator of a container (application) to choose its own distribution.
Additionally, remember that packaging a whole distribution does not necessarily waste a lot of resources:
Docker images are stored as deltas against a common base, so two images based on debian:jessie can reuse the same data for that base (see the commands sketched after this list).
Distributions are actually not that large, as they are usually minified versions of the full system images.
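As a rough illustration (the image names my-app-a and my-app-b are hypothetical, and assume both were built from debian:jessie), you can check how the base layers are shared:

docker image inspect -f '{{.RootFS.Layers}}' my-app-a
docker image inspect -f '{{.RootFS.Layers}}' my-app-b
# the leading layer digests are identical, because both images reuse the debian:jessie base
docker system df -v
# the SHARED SIZE column counts those common layers only once on disk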
If you really want to create a minimal image, try busybox. However, you will oftentimes find yourself outgrowing it quite fast for any real world container image.
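To get a feel for how small it is (a quick sketch; the exact size varies by tag):

docker pull busybox
docker image ls busybox            # typically only a few megabytes
docker run --rm busybox uname -a   # yet it still ships a shell and core utilities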
I'm writing a Windows application that interacts with a WSL distribution/instance which has Docker running. Also, in some cases, my application will also run commands/processes directly in the WSL/Linux distribution.
I would like for that WSL distribution to be "managed" by my application. By that, I mean that:
The distribution should be installed by my application so that users don't have to have knowledge of WSL itself. If the users have to install the WSL distribution themselves, it's entirely possible that they could misconfigure it. It's also possible that some users might not be able to get it up and running in the first place.
The user should have no control over my application's managed WSL distribution. They would not be able to:
Shut down the instance while it is running under my application.
Uninstall the instance without uninstalling my application.
Preferably the user would not even see the distribution at all.
Can I create/install a distribution that is managed by my application in this way?
It's obvious that I could simply run a batch script, import the instance, and then just run it like that. But this seems verbose and, as mentioned, would still be visible to the end user.
Well, I definitely can't offer you a direct answer that meets those requirements, and I don't think it's possible, at least not currently.
WSL distributions are always visible to the end user, and they can always be --terminate'd or even --unregister'd. Even Docker Desktop's distributions are subject to these limitations.
You can see the Docker Desktop distros with wsl.exe -l -v, returning something like:
  NAME                   STATE           VERSION
* Tumbleweed             Stopped         2
  ...
  Ubuntu-22.04           Running         2
  Artix-dinit            Stopped         2
  docker-desktop         Stopped         2
  docker-desktop-data    Stopped         2
  ...
side-note: I have entirely too many distributions ;-)
But ... Docker Desktop does overwrite the distro with a new version when you upgrade, so any changes to it will (at least eventually) be overwritten.
So that does provide a possibility:
Ship your managed distro in tar form in/with your application.
When starting the application for the first time, create (--import) the distribution from the tarball.
Make sure that nothing in the distribution writes to or modifies the filesystem. You might even be able to mark it read-only in some way, but I haven't tried that.
Perform a checksum of the distribution's vhdx each time the application starts, and confirm that it hasn't been modified (see the sketch after this list).
If it has been modified, then delete it and re-import.
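A minimal sketch of that flow from the command line, assuming a bundled rootfs tarball named app-rootfs.tar, an install directory under C:\ProgramData\MyApp, and a distro name MyAppDistro (all placeholders):

wsl --import MyAppDistro C:\ProgramData\MyApp\wsl app-rootfs.tar --version 2
# record a baseline checksum of the backing vhdx right after the import (PowerShell)
Get-FileHash C:\ProgramData\MyApp\wsl\ext4.vhdx -Algorithm SHA256
# on each later start, recompute the hash; if it differs, throw the distro away and re-import
wsl --unregister MyAppDistro
wsl --import MyAppDistro C:\ProgramData\MyApp\wsl app-rootfs.tar --version 2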
Alternatively, as mentioned in the comments, there may be a way to do this with a Hyper-V VM (but only, of course, on Windows Professional or higher). The WSL2 VM itself is hidden and managed in some way, and the same appears to be the case with BlueStacks.
I'm still not sure how they do this. From watching the Event Viewer, it appears that a new Hyper-V "Partition" is created when starting WSL2. This does not appear to be related to any "disk partitioning", so I believe that it is some type of Hyper-V partition that is hidden from the user.
Inside a CI/CD environment, I have a tar.gz file that I need to package into a virtual machine image. I want to take an Ubuntu Server installation, install some packages, install my tar.gz file, and output the image in both EC2/AMI and VMware OVF formats (and possibly others in the future, e.g. Docker images).
I've been looking at Packer, Vagrant, and Ansible. But I'm not certain which of these tools will help me accomplish what I need.
Packer sounds like the right solution, but the documentation isn't very clear on how to start with a VMware OVF/OVA image and build an EC2/AMI image. Or am I able to start with a Docker image and output an EC2/AMI image? Based on the docs, it seems like I need to start with an AMI to build an AMI, or start with a ".vmx" (the docs don't actually say anything about OVF/OVA files) to build an OVF/OVA. But can I start with format A and end up with format B?
Is Vagrant or Ansible better for this?
Here is what I've gathered.
I found a nicely documented approach that starts with an Ubuntu ISO image, uses a preseed file for unattended Ubuntu installation, and deploys it out to vSphere as a template:
https://www.thehumblelab.com/automating-ubuntu-18-packer/
In order to deploy it out to vSphere, it requires a plugin from JetBrains, and it uses vSphere/ESXi to build the template image. The Packer docs also discuss "Building on a Remote vSphere Hypervisor" using remote_* keywords in the JSON file, but I suppose this accomplishes the same thing.
Then to build the EC2 AMI images, I believe you add another builder to the json. However, I don't believe you can start from the same ISO image as done in the VMware builder. Instead, I believe you need to start with a prebuilt AMI image (specified by the source_ami field in the json).
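If you do keep both builders in one template, you can at least run them independently with -only; the template and builder names below are hypothetical:

packer validate template.json
packer build -only=vmware-iso template.json   # build the VMware image from the ISO
packer build -only=amazon-ebs template.json   # build the AMI from the prebuilt source_ami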
I guess Packer doesn't allow you to start from a single source A and fan out to target formats B, C, etc. If you want to build an AMI, you need to start with an AMI. If you want to build VMware images, you need to start from an ISO or a .vmx (I suppose this means OVF) and build out an OVF/OVA or template.
I'm a junior web developer working in a small web agency. We work on Windows 10 with WampServer, mainly on Prestashop, WordPress and Symfony websites. For every task I am given a "ticket", for which I must develop on a new branch (if needed). When the work is done I merge my branch into the develop branch, which is hosted on a preproduction server, and if it is considered OK, it is then merged into the master branch, which is the website in production hosted on another server.
I was given the task to do some research on Docker and to find how it could improve our workflow.
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
But given the fact that we already have a preproduction server, I don't see the point of using Docker? Did I miss something?
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that could be used instantly, as is and without configuration, by another developer working on his part?
But from what I understand so far, Docker containers are just similar to virtual machines and are only useful for building isolated environments in order to test an application without having to think about dependencies.
No, Docker containers are not only useful for testing.
When building a correct workflow with Docker you can achieve 100% parity between development, staging and production if they all use the same Docker images.
But given the fact that we already have a preproduction server, I don't see the point of using Docker? Did I miss something?
This preproduction server, which is what is normally called a staging server, should also use Docker to run the code.
Also, could Docker be of use for sharing projects between teammates (we all work on Windows)? For example, if a developer is working on a website locally, can he create a container and its image that could be used instantly, as is and without configuration, by another developer working on his part?
Yes... You create base Docker images with only the stuff necessary to run in production, and from them you can build other Docker images with the necessary developer tools.
Production image company/app-name:
FROM php:alpine
# your other production dependencies go here
Assuming one builds the Docker image with the name company/app-name, the image for development can then be built on top of it.
Development image company/app-name-dev:
FROM company/app-name
# add only developer tools here
Now the developer uses both images, company/app-name and company/app-name-dev, during development, and on the staging server only the company/app-name Docker image will be used to run the code.
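A rough sketch of how the two images might be built and used; the Dockerfile.dev file name and the mount paths are assumptions, not something prescribed by this setup:

docker build -t company/app-name .
docker build -t company/app-name-dev -f Dockerfile.dev .
# during development, run the dev image with the project mounted in
docker run --rm -it -v "$PWD":/app -w /app company/app-name-dev sh
# staging uses only the production image
docker run --rm company/app-name php -v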
After some months of working in this flow you may even feel confident enough to start using company/app-name to deploy the app to production, and then you have 100% parity between development, staging and production.
Take a look at the PHP Docker Stack for some inspiration; I built it with this goal in mind at my latest job, but I ended up leaving the company while we were in the process of adopting it in development...
But don't put all the services you need into one single Docker image, because that is a bad practice in Docker. Instead use one service per Docker image: one for PHP, another for the database, another for the Nginx server, etc. See here how several services can be composed together with Docker Compose.
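To make the one-service-per-image idea concrete without a Compose file, here is a hedged sketch using plain docker run and a shared network; the names, credentials and images are placeholders:

docker network create app-net
docker run -d --name db  --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql:8
# assumes company/app-name defines a long-running command such as php-fpm
docker run -d --name php --network app-net company/app-name
docker run -d --name web --network app-net -p 80:80 nginx:alpine
# each concern lives in its own container; Docker Compose just declares the same wiring in a file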
I need some straight answers about this, as the current Docker info and general web info mixes Hyper-V and VMware information up.
I have installed Docker on my Windows 10 Pro machine. I do not have VMware/VirtualBox installed; I don't need it since I have Hyper-V. I can use Docker on a Linux Ubuntu box fairly well and I (think!) I understand volumes. Sorry for the background...
I am developing a Node app and I simply want to have a volume within my Linux container mapped to a local directory on my Windows machine. This should be simple, but every time I run my (let's say Alpine Linux) container with '-v /c/Users:/somedata', the /somedata directory within the Linux container is empty.
I just can't get this functionality to work on Windows. If you have a decent link I would be very grateful, as I have been going over the Docker info for two days and I feel I am nowhere!
If volumes are not supported between Windows and Linux because of the OS differences, would the answer be to use COPY within a Dockerfile, and simply copy my dev files into the container being created?
MANY MANY THANKS IN ADVANCE!
I have been able to get a link to take place, but I don't really know what the rules are, yet.
(I am using Docker for windows 1.12.0-beta21 (build: 5971) )
You have to share the drive(s) in your docker settings (this may require logging in)
The one or two times I've gotten it to work, I used
-v //d/vms/mysql:/var/lib/mysql
(where I have a folder D:\vms\mysql)
(note the "//d" to indicate the drive letter)
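Putting that together, a full command along those lines might look like this (the folder, password and image tag are only examples for the sketch):

docker run -d --name mysql -v //d/vms/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# then check that data files actually appear under D:\vms\mysql on the host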
I am trying to reproduce this with a different setup, though, and I am not having any luck. Hopefully the next release will make this even easier for us!
Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch you can do it with VMware (or insert your favorite VM software here).
Build and install your linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then, just keep backup copies of your VM image in sync with the different AMI's you upload.
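For reference, the classic instance-store AMI tools flow inside the guest looked roughly like this; the key, certificate, account ID and bucket are placeholders, and the exact flags vary with tool versions:

ec2-bundle-vol -k pk.pem -c cert.pem -u 111122223333 -r x86_64 -d /tmp
ec2-upload-bundle -b my-ami-bucket -m /tmp/image.manifest.xml -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY
# then register the uploaded bundle as an AMI (via the EC2 API tools or the AWS console)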
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc.).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.