More swap space for Docker on Mac OS X Yosemite - oracle

I am trying to add more swap space to Docker in order to avoid this error when installing the Oracle database:
This system does not meet the minimum requirements for swap space.
Based on the amount of physical memory available on the system, Oracle
Database 11g Express Edition requires 2048 MB of swap space. This
system has 1023 MB of swap space. Configure more swap space on the
system and retry the installation.
I am following the instructions described here:
https://forums.docker.com/t/docker-for-mac-configure-swap-space/20656/2
but when I execute mkswap I get "command not found":
mkswap /var/swap.file
Any idea?

Docker for Mac runs an Alpine Linux VM to host containers.
This is a prebuilt boot image designed for ease of use, and it is updated over time, so it can sometimes be hard to customise: most configuration is reset when the VM reboots.
In this case you can persist a swap file change, but configuration like this may change between versions without notice. You might be better off running a custom VM for this, so your swap configuration is sure to stick around.
Docker for Mac 17.06.0
Swap is controlled by the do_swapfile function in the /etc/init.d/automount init script in the VM. If the swap file exists, it will be used as is. As the swap file is stored in /var it is persisted across reboots and can be manually customised.
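For context, the logic amounts to something like the sketch below. This is a paraphrase, not the literal automount source; the path matches the commands that follow, and the 1 GiB default matches the 1023 MB of swap the installer reported:
do_swapfile() {
    if [ ! -f /var/spool/swap ]; then
        # the default swap file is only created when missing, so a
        # manually created one survives reboots untouched
        dd if=/dev/zero of=/var/spool/swap bs=1k count=1048576
        chmod 600 /var/spool/swap
        mkswap /var/spool/swap
    fi
    swapon /var/spool/swap
}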
Attach to the VM's tty from your Mac with screen (brew install screen if you don't have it):
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Then in the VM, replace the existing swap file with a new one of the required size and reboot the box. The size of the file is the block size bs * count.
swapoff -a                                               # disable the existing swap
dd if=/dev/zero of=/var/spool/swap bs=1k count=2097152   # 2097152 * 1 KiB = 2 GiB
chmod 600 /var/spool/swap                                # swap must not be world readable
mkswap /var/spool/swap                                   # write the swap signature
reboot
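To pick a different size, scale count accordingly. For example, a 4 GiB swap file at the same 1 KiB block size:
dd if=/dev/zero of=/var/spool/swap bs=1k count=4194304   # 4 * 1024 * 1024 KiB = 4 GiB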
When the VM has rebooted, you should be able to connect again and see the new size of the VM's Swap space with free.
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
/ # free
total used free shared buffers cached
Mem: 3526164 389952 3136212 165956 20968 208160
-/+ buffers/cache: 160824 3365340
Swap: 2097148 0 2097148

Related

How to enable swap/swapfile on Google container optimized OS on GCE?

I'm using the cos-stable container-optimized OS on GCE. It's a micro instance, so RAM is pretty sparse. I tried to enable swap to prevent lock-ups due to OOM during docker pulls, but I can't get it to work.
I realize most directories are stateless, so I put the swapfile under home:
sudo fallocate -l 1G /home/user/swapfile
sudo chmod 600 /home/user/swapfile
sudo mkswap /home/user/swapfile
results in:
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=6e965805-2ab9-450f-aed6-577e74089dbf
But sudo swapon /home/user/swapfile gives the error:
swapon: /home/user/swapfile: swapon failed: Invalid argument
Any ideas how to enable swap on COS?
Disk-based swap is disabled in the COS image.
You can enable it with
sysctl vm.disk_based_swap=1
I have the following in my cloud-init:
bootcmd:
- sysctl vm.disk_based_swap=1
- fallocate -l 1G /var/swapfile
- chmod 600 /var/swapfile
- mkswap /var/swapfile
- swapon /var/swapfile
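After a reboot you can confirm the flag stuck and the swap is active, for example:
sysctl vm.disk_based_swap   # should print vm.disk_based_swap = 1
cat /proc/swaps             # /var/swapfile should be listed
free -m                     # the Swap row should be non-zero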
Swap is not supported on Container-Optimized OS.
Swap would effectively destroy much of the behavioral isolation Google offers between containers.
Guaranteed pods should never require swap. Burstable pods should have their requests met without requiring swap. BestEffort pods have no guarantee.
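For reference, you can check which QoS class Kubernetes assigned to a given pod (and therefore how it is treated under memory pressure) with:
kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'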
I highly suggest you use a bigger instance, as an f1-micro only has about 600 MB of RAM and you still need to run the OS on the instance in addition to your containers.

Resizing data disk on Alicloud

I have a production server running CentOS 6.9 on Alicloud in China. The instance is of the ecs.sn1.3xlarge type. Recently one of my data disks filled up, so I decided to resize the volume and followed the step-by-step instructions available on this page: https://www.alibabacloud.com/help/doc-detail/25452.html.
Here are the steps that I followed:
1. Resized the disk from the console
2. Rebooted the system (rebooting alone didn't resize/populate the disk on the system)
3. Unmounted the disk (umount)
4. Ran fdisk on the desired disk
5. e2fsck -f /dev/vdb1 # check the file system
6. resize2fs /dev/vdb1 # resize the file system
Thank you in advance.
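For anyone stuck on the fdisk step: in the linked guide it amounts to deleting and re-creating the partition so it spans the enlarged disk, keeping the same start sector, before running the file-system commands above. A sketch, with the device names assumed from the question:
umount /dev/vdb1
fdisk /dev/vdb        # interactive: d (delete partition 1), n (re-create it
                      # with the SAME start sector; default end = end of disk),
                      # then w (write and quit)
e2fsck -f /dev/vdb1   # check the file system
resize2fs /dev/vdb1   # grow the file system to fill the partition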

Resize VirtualBox Ubuntu VM storage not taking effect

I followed these instructions to resize my VirtualBox Ubuntu VM on Mac:
http://osxdaily.com/2015/04/07/how-to-resize-a-virtualbox-vdi-or-vhd-file-on-mac-os-x/
This is after the change:
*****-M-D2KA:$ VBoxManage showhdinfo ~/VirtualBox\ VMs/P4_Runtime/P4_Runtime.vdi
UUID: ce0ccd77-f265-46cd-9679-e25e64f1c992
Parent UUID: base
State: locked read
Type: normal (base)
Location: /Users/*****/VirtualBox VMs/P4_Runtime/P4_Runtime.vdi
Storage format: VDI
Format variant: dynamic default
Capacity: 25000 MBytes
Size on disk: 9967 MBytes
Encryption: disabled
In use by VMs: P4_Runtime (UUID: 5ea52b11-997f-45d8-b7d6-effa37a3b649) [Snapshot 1 (UUID: 409c1035-2134-4532-a931-a29018d33dc6)]
Child UUIDs: 540ae750-5307-44ef-a313-95134ae353b7
165fe99e-490d-4dd9-9602-00e3aaa8f82c
But for some reason, it does not seem to take effect:
This is the "df -k" output in the VM, where I get a "No space left on device" error:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 10253588 9713020 0 100% /
What am I missing?
I found out what I missed. I used gparted to resize the partition.
Resizing the VHD doesn't change the size of the partition /dev/sda. You can run lsblk inside the guest to see the additional space. To make the extra space usable in the guest OS, you may:
Use something like gparted as mentioned here. Instructions for doing that on a VHD can be found here. Note that this might not be the easiest route, but you may be forced into it if you don't plan to move some of your mount points (for example, if you're not ready to move /home/ to a new partition).
Or create a new partition; again, instructions on how to do this are present here. I would prefer this option over the first, as sketched below.
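A minimal sketch of that second option, assuming the new space sits at the end of /dev/sda and you want it as a separate ext4 partition mounted at /data (device and mount point are illustrative):
lsblk                            # confirm the unallocated space on /dev/sda
sudo fdisk /dev/sda              # interactive: n (new partition), accept the
                                 # defaults to use the free space, then w
sudo partprobe                   # re-read the partition table (or reboot)
sudo mkfs.ext4 /dev/sda2         # format the new partition (adjust the number)
sudo mkdir -p /data
sudo mount /dev/sda2 /data
echo '/dev/sda2 /data ext4 defaults 0 2' | sudo tee -a /etc/fstab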

How to set Docker's system memory?

I'm using Docker 1.13.1 for Mac. The Docker client allows you to change the amount of memory provided to Docker using a simple slider interface.
How can I set this value via docker's command line utility?
For added clarity, this is not per-container memory; this is the value of "Total Memory" that's returned when you run docker info.
Thank you
With docker (at least version 18.03.1) the settings for the VM are maintained in a special file located at:
/Users/<username>/Library/Group\ Containers/group.com.docker/settings.json
If you quit Docker, you can edit it directly from the command line using sed. For example, the command below replaces every occurrence of 2048 (the 2 GB limit) with 10240 (a 10 GB limit) and creates a backup of the original settings at settings.json.bak:
sed -i .bak 's/2048/10240/g' /Users/`id -un`/Library/Group\ Containers/group.com.docker/settings.json
When docker restarts, it will now have 10 GB.
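If you'd rather not rely on a global string replacement, a more surgical option is jq, assuming the memory key in your settings.json is named memoryMiB (check the file first, as the key name may differ between versions):
cd ~/Library/Group\ Containers/group.com.docker
jq '.memoryMiB = 10240' settings.json > settings.json.new && mv settings.json.new settings.json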
On a Mac, Docker actually runs as a Hyperkit virtual machine. The docker command line utility just interfaces with the docker daemon process running inside that virtual machine.
If you run ps auxwww | grep hyperkit on your Mac, you'll see the hyperkit process running with the amount of memory passed as an argument. This is controlled by the Docker Mac client, and I imagine the saved value is stored in a .plist file somewhere.
In order to modify that on the command line, you'd need to find where the Docker Mac client stores the data, modify it, and restart the hyperkit process.

Custom Linux kernel build failure in VMware Workstation

While trying to compile, build, and boot a custom kernel inside VMware Workstation, booting the new kernel fails and drops to a shell with the error "failed to find disk by uuid".
I tried this with both ubuntu and centos.
Things I tried that didn't help:
checked the mapping by UUID in the boot entry and its existence in the by-uuid directory
initramfs-update
replaced root=uuid=<> with /dev/disk/sda3
Is it an issue with VMware Workstation?
How can it be rectified?
I had a similar fault with my own attempts to bootstrap Fedora 22 onto a blank partition using a Centos install on another partition. I never did solve it completely, but I did find the problem was in my initrd rather than the kernel.
The problem is the initrd isn't starting LVM because dracut didn't tell the initrd that it needs LVM. Therefore if you start LVM manually you should be able to boot into your system to fix it.
I believe this is the sequence of commands I ran from the emergency shell to start LVM:
vgscan          # scan all disks for volume groups
vgchange -ay    # activate every volume group found
lvs             # list logical volumes to confirm they are visible
this link helped me remember
Followed by exit to resume normal boot.
You might have to mount your LVM /etc/fstab entries manually, I don't recall whether I did or not.
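If the entries are already in /etc/fstab, mounting everything in one go should work once the volume group is active:
mount -a   # mount all file systems listed in /etc/fstab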
Try this:
sudo update-grub
Then:
mkinitcpio -p linux
It won't hurt to check your fstab file. There, you should find the UUID of your drive. Make sure you have the proper flags set in the fstab.
Also, there's a setting in grub.cfg that has GRUB use the old style of hexadecimal UUIDs. Check that out as well!
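To compare the UUIDs the kernel actually sees against what fstab and GRUB reference, something like this helps:
blkid                     # real UUIDs of all block devices
grep UUID /etc/fstab      # UUIDs your fstab expects
grep -r UUID /boot/grub   # UUIDs your GRUB config references (/boot/grub2 on CentOS)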
The issue is with the creation of the initramfs. After doing a
make oldconfig
and choosing the defaults for the new options, make sure that enough disk space is available for the image to be created.
In my case the image created was not correct, and hence it failed to mount at boot time.
When compared, the image size was quite a bit smaller than the existing image of the lower version, so I added another disk with more than sufficient space, and then
make bzImage           # build the kernel image
make modules           # build the kernel modules
make modules_install   # install the modules under /lib/modules/<version>
make install           # install the kernel, generate the initramfs, update the bootloader
everything started working like a charm.
I still wonder why the image creation completed without throwing any error, every single time, yet produced a corrupt (undersized) image.
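To catch this condition up front, check the free space before building and compare the generated image against the previous one, for example:
df -h /boot /              # make sure there is room for the new image
ls -lh /boot/initramfs-*   # compare sizes across versions (initrd.img-* on Ubuntu)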
