When I'm about to shut down my host machine, should I:
vagrant halt
OR
vagrant suspend
What's the difference?
tl;dr
Use vagrant halt when you want to power off your machine; use vagrant suspend when you want to hibernate it.
From the Vagrant docs:
vagrant suspend
A suspend effectively saves the exact point-in-time state of the
machine, so that when you resume it later, it begins running
immediately from that point, rather than doing a full boot.
This generally requires extra disk space to store all the contents of
the RAM within your guest machine, but the machine no longer consumes
the RAM of your host machine or CPU cycles while it is suspended.
vagrant halt
This command shuts down the running machine Vagrant is managing.
Which one should you use?
It is basically up to you. If you have ongoing work on the VM (say, multiple applications open through a GUI), you would prefer to suspend it, so that everything is still there when you power the machine back up (Vagrant/VirtualBox needs to store the state of the instance on your hard drive, consuming some space on the host). If you want a clean start, with all your processes set up from init, then go for vagrant halt.
Example:
If you don't work much on your VM, meaning that all your project files are stored on your host and shared with your VM so you can see changes reflected through a LAMP server, then you can safely go with vagrant halt.
If you need to start specific processes manually when the instance comes up, or you work on files directly inside the VM, then it's better to suspend it: when you turn it back on, your session is preserved and the instance is restored to the same state it was in before you suspended it.
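A minimal sketch of the two workflows from the command line (assuming a Vagrantfile in the current directory):

# Clean stop: the next boot starts fresh, with services brought up from init
vagrant halt
vagrant up

# Hibernate: the next start resumes the exact RAM state, skipping the boot
vagrant suspend
vagrant resume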
Related
Suspending & resuming my virtual machine breaks the k8s deployment
When I suspend the virtual machine with minikube stop and then resume it with minikube start, Minikube re-deploys my app from scratch.
I see this behaviour with Minikube versions newer than v1.18 (I run v1.19).
The setup:
The Kubernetes deployment mounts a volume with the source code from my host machine, via hostPath.
Also, I have a container under initContainers that sets up the application.
Since the new "redeploy on resume" behaviour appeared, the init container breaks my deploy whenever I have work-in-progress code on my host machine.
The issue:
Now, if I have temporary, not-quite-working code, I can no longer suspend the machine with unfinished work between working days, because every time I resume it, Minikube tries to deploy again with the broken code and fails with an Init:CrashLoopBackOff.
The workaround:
For now, each time I resume the machine I need to:
1. stash/commit my WIP code
2. check out the last commit with a working deployment
3. run the deployment and wait for it to complete its initialization (minutes...)
4. check out / stash-pop the code saved at step 1.
I can survive, but the workflow is terrible.
How do I restore the old behaviour?
How do I make my deploys stay untouched when suspending the VM, as expected, instead of being re-deployed every time I resume?
In short, there are two ways to achieve what you want:
On current versions of minikube and VirtualBox, you can use the save state option in VirtualBox directly.
Move the initContainer's code to a separate Job.
More details about minikube + VirtualBox
My environment is minikube 1.20, VirtualBox 6.1.22 (released just yesterday), and macOS, with the minikube driver set to virtualbox.
First, minikube + VirtualBox under different scenarios:
minikube stop does the following:
Stops a local Kubernetes cluster. This command stops the underlying VM
or container, but keeps user data intact.
What happens is that the virtual machine where minikube is set up stops entirely. minikube start starts the VM and all processes in it. All containers are started as well, so if your pod has an init container, it will run first anyway.
minikube pause pauses all processes and frees up CPU resources, while memory remains allocated. minikube unpause gives CPU resources back and continues executing containers from the state they were paused in.
Based on the different scenarios I tried with minikube, this is not achievable using only minikube commands. To avoid losing state in your minikube environment due to a host restart, or because you need to stop the VM to free up resources, you can use the save state feature of VirtualBox, from the UI or the CLI. Below is what it does:
VBoxManage controlvm savestate: Saves the current state of the VM to disk and then stops the VM.
VirtualBox creates something like a snapshot, with all the memory contents stored inside it. When the virtual machine is started again, VirtualBox restores it to the exact state it was in when it was saved.
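A minimal sketch of that save/restore cycle from the CLI (assuming the VM is named "minikube" in VirtualBox, which is the default for the virtualbox driver):

# Save the full VM state (including RAM) to disk, then stop the VM
VBoxManage controlvm minikube savestate

# Later: start the VM straight back into the saved state
VBoxManage startvm minikube --type headless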
One more assumption: since this works the same way in v1.20, it is expected behaviour rather than a bug (otherwise it would likely have been fixed already).
Init-container and jobs
You may consider moving your init container's code to a separate Job, so you avoid unintended pod restarts breaking the deployment's main container. It's also advisable to make the init container's code idempotent.
Here's a quote from the official documentation:
Because init containers can be restarted, retried, or re-executed,
init container code should be idempotent. In particular, code that
writes to files on EmptyDirs should be prepared for the possibility
that an output file already exists.
This can be achieved with Jobs in Kubernetes, which you can run manually whenever you need to.
To enforce this workflow, have the deployment's init container check for the Job's completion, or for a marker file on a shared data volume, to confirm the setup code has run; the deployment will then come up fine.
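A minimal sketch of that flow with kubectl (the manifest, Job, and Deployment names are hypothetical):

# Run the setup Job manually, only when you actually want to (re)initialize
kubectl apply -f setup-job.yaml
# Block until it finishes before touching the deployment
kubectl wait --for=condition=complete job/app-setup --timeout=300s
# Roll the deployment once setup is known to be good
kubectl rollout restart deployment/app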
Links with more information:
VirtualBox save state
initContainers
kubernetes jobs
Should I run docker-machine stop default every time before I shut down my Mac? Or is it OK to shut down with the machine 'running'?
It's (mostly) OK to shut down your system with "running" machines.
For local machines you will be relying on your VM's normal shutdown behaviour at system shutdown. For externally hosted machines, they will be left running.
With docker-machine on OS X and VirtualBox 5.x, any machines running in VirtualBox VMs will be paused and have their current state saved when the host is shut down. They will be left in that state at system startup until you start them back up (via docker-machine or some VirtualBox method).
docker-machine does not attempt to do anything to your machines on a shutdown signal as it is not a system daemon. docker-machine is a cli utility you manually run to manage machines.
The "mostly" caveat is that some applications really struggle with the time dilation that occurs from pausing a VM. If you do run into issues with your os or apps you could have launchd manage the vm completely so it starts and stops automatically when you login. There is most likely a plist to make launchd run a docker-machine stop default at logoff too.
Docker will receive the shutdown signal and try to shut itself down. However, it does not guarantee a graceful shutdown for all of the containers, and it might hold up your Mac's shutdown in the process.
Edit
From their source code
// containerStop halts a container by sending a stop signal, waiting for the given
// duration in seconds, and then calling SIGKILL and waiting for the
// process to exit. If a negative duration is given, Stop will wait
// for the initial signal forever. If the container is not running Stop returns immediately.
I have not found any mention that containers will be paused and committed.
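That stop-then-kill behaviour is the same one exposed by the CLI; a quick sketch (the container name is hypothetical):

# Send the stop signal, wait up to 30 seconds, then SIGKILL
docker stop --time 30 mycontainer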
How can Docker run, say, an openSUSE container on a Debian host? They use different kernels, with separate modules. Also, older Debian versions used older kernels, so how can they run on a kernel version 3.10+? Older kernels have only older built-in functions, so how can an old distro use new features?
What is "the trick" in it?
Docker never uses a different kernel: the kernel is always your host kernel.
If your host kernel is "compatible enough" with the software in the container you want to run it will work; otherwise, it won't.
"Containers" Are Just Process Configuration
The key thing to understand is that a Docker container is not a virtual machine: it doesn't create a new virtual computer on which to run the software. Instead, Docker starts processes in your existing OS, just like you start new processes from the command line.
The difference between a "containerized" process and an ordinary process is the restrictions put on the containerized process and the changes to how it sees the environment around it. (These are passed on to any child processes started by the containerized process.) Typical restrictions and changes include:
Instead of using the host's root filesystem, mount a different filesystem on / (usually one supplied with the container's image). Parts of the host filesystem may be mounted underneath the new process' root filesystem, e.g. by using docker run -v /u/myprogram-data:/var/data/myprogram so that when the containerized process reads or writes /var/data/myprogram/file this reads/writes /u/myprogram-data/file in the host filesystem.
Create a separate process space for the containerized process so that it can see only itself and its children (with ps or similar commands), but cannot see other processes running on the host.
Create a separate user namespace so that the users in the container are different from those on the host: e.g., UID 1234 in the containerized process will not be the same as UID 1234 in a non-containerized one.
Create a separate set of network interfaces with their own IP addresses, often using a "virtual router" and address translation between those and the host network interfaces. (E.g., the host, when it receives a packet on port 8080, forwards it to port 80 on the container processes' virtual network interface.)
All of this is done by facilities built into the kernel; you can do any of it yourself without Docker if you write a program to do the appropriate setup and set the appropriate parameters when it starts a new process.
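You can see this for yourself without Docker; a minimal sketch using util-linux's unshare (assuming a Linux host and root privileges):

# Run ps in a fresh PID namespace with its own /proc mounted:
# it sees only itself, not the host's other processes
sudo unshare --pid --fork --mount-proc ps aux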
Compatibility
So what does "compatible enough" mean? It depends on what requests the program makes of the kernel (system calls) and what features it expects the kernel to support. Some programs make requests that will break things; others don't. For example, on an Ubuntu 18.04 (kernel 4.19) or similar host:
docker run centos:7 bash works fine.
docker run centos:6 bash fails with exit code 139, meaning it terminated with a segmentation violation signal; this is because the 4.19 kernel doesn't support something that that build of bash tried to do.
docker run centos:6 ls works fine because it's not making a request the kernel can't handle, as bash was.
If you try docker run centos:6 bash on an older kernel, say 4.9 or earlier, you'll find it will work fine. (At least as far as I tested it.)
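A quick way to reproduce that experiment yourself (exit codes checked via $?):

docker run --rm centos:7 bash -c 'echo ok'   # prints "ok"
docker run --rm centos:6 bash -c 'echo ok'   # segfaults on a 4.19 host
echo $?                                      # 139 = 128 + SIGSEGV (signal 11)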
How can Docker run, say, an openSUSE container on a Debian host?
Because the kernel is the same, and it supports the Docker engine running all those container images: the host kernel should be 3.10 or newer, but its list of system calls is fairly stable.
See "Architecting Containers: Why Understanding User Space vs. Kernel Space Matters":
Applications contain business logic, but rely on system calls.
Once an application is compiled, the set of system calls that an application uses (i.e. relies upon) is embedded in the binary (in higher level languages, this is the interpreter or JVM).
Containers don’t abstract the need for the user space and kernel space to share a common set of system calls.
In a containerized world, this user space is bundled up and shipped around to different hosts, ranging from laptops to production servers.
Over the coming years, this will create challenges.
From time to time new system calls are added, and old system calls are deprecated; this should be considered when thinking about the lifecycle of your container infrastructure and the applications that will run within it.
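To see that system-call dependence concretely, a small sketch (assuming a Linux host with strace installed):

# Count the system calls even a trivial command makes
strace -c -f ls > /dev/null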
See also "Why kernel version doesn't match Ubuntu version in a Docker container?":
There's no kernel inside a container. Even if you install a kernel, it won't be loaded when the container starts. The very purpose of a container is to isolate processes without the need to run a new kernel.
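A one-line demonstration (assuming Docker and the alpine image are available):

uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # the same version inside the container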
I have an OS which doesn't shut down properly when I run it in one particular hypervisor (KVM) even though it works on all other hypervisors. Instead, what it does is sync all data to disk and then hang indefinitely with the message "Hit any button to reboot" until you issue a hard shutdown from the hypervisor.
I'm trying to automate a no-touch installation of this OS from an .iso file into a .box file using Packer. However, the Packer run fails every time because it hits the shutdown_timeout (from the QEMU builder) while the OS is hung waiting for input. I'm looking for a workaround -- it seems like either of the following could work (and maybe there are other ways), but I can't figure out any way to do them! Some ideas I've searched for were:
tell Packer to do a hard shutdown after a certain amount of time
tell Packer that hitting this timeout isn't an error and it should just do a hard shutdown and continue with the provisioner steps
Upon rereading the docs, I found the answer:
shutdown_command (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine.
D'oh!
Can anyone recommend an automated backup solution that can handle VMWare instances?
I would like something to run overnight, suspend any running virtual machines, back up the files over the network (or hand off to another backup job), and (optionally) resume any VMs that it suspended.
A free/open source solution would be ideal, but I'll pay for a closed solution if necessary.
You could do this with a scheduled task and a script - VMware Workstation is pretty easy to automate from the command line.
A sketch of the script as a Windows batch file (the backup share path is an example):
@echo off
rem Suspend each running VM, copy its disks to the backup share, then resume it
for /f "skip=1 delims=" %%V in ('vmrun.exe list') do (
    vmrun.exe suspend "%%V"
    xcopy /i /y "%%~dpV*.vmdk" "\\backup-server\vmbackups\%%~nV\"
    vmrun.exe start "%%V" nogui
)
There's some more plumbing to be done, but once you have a working backup script you can schedule it or run it whenever you like. Since the script gets its VM list from vmrun.exe list, you don't have to worry about updating it as you add more VMs. Hope that gets you started.