I have a few applications where EC2 small instances are, well, too large. So the announcement of micro instances is just what the doctor ordered.
I'd like to take a small instance's EBS volume, detach it, and pair it up with a micro instance. At some point it might be great to go the other way and upsize a micro instance to a small or beyond.
For this failed experiment I tried:
Create a new small instance with the Alestic Ubuntu 10.04 32-bit AMI (ami-1234de7b). It boots like a charm.
Power down my freshly minted micro instance and detach the volume that was created for me in the previous step.
Attach the small instance's volume to the micro instance.
Power up.
Nada.
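(For reference, the swap above in EC2 API tools terms is roughly the following; the volume and instance IDs are placeholders, and both instances have to be stopped before their volumes can be moved:)
# Stop both instances so their root volumes can be detached
ec2-stop-instances i-xxxxmicro i-xxxxsmall
# Free up /dev/sda1 on the micro, then move the small instance's root volume over
ec2-detach-volume vol-xxxxmicro
ec2-detach-volume vol-xxxxsmall
ec2-attach-volume vol-xxxxsmall -i i-xxxxmicro -d /dev/sda1
ec2-start-instances i-xxxxmicro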
What's odd is that there's no console log output until I power down. Then I see it all:
[ 0.000000] Reserving virtual address space above 0xf5800000
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
...
[ 1.221261] VFS: Mounted root (ext3 filesystem) readonly on device 8:1.
[ 1.222164] devtmpfs: mounted
[ 1.222202] Freeing unused kernel memory: 216k freed
[ 1.223409] Write protecting the kernel text: 4328k
[ 1.223760] Write protecting the kernel read-only data: 1336k
init: console-setup main process (63) terminated with status 1
init: plymouth main process (45) killed by SEGV signal
init: plymouth-splash main process (196) terminated with status 2
cloud-init running: Thu, 09 Sep 2010 17:37:54 +0000. up 2.61 seconds
mountall: Disconnected from Plymouth
init: hwclock-save main process (291) terminated with status 1
Checking for running unattended-upgrades:
 * Asking all remaining processes to terminate...       [ OK ]
 * All processes ended within 1 seconds....             [ OK ]
 * Deconfiguring network interfaces...                  [ OK ]
 * Deactivating swap...                                 [ OK ]
 * Unmounting local filesystems...                      [ OK ]
 * Will now halt
[ 185.599636] System halted.
This method of swapping volumes has worked well between same-sized instances in the past, and this is my first attempt at doing it between different sizes. Is this just not possible, or am I missing something fundamental in my EC2 knowledge?
Even though this will probably be migrated to Server Fault, I ran into the exact same problem with this instance earlier today.
It appears that this image assumes that there will be ephemeral storage present, when there is none on the micro instances. To work around this, comment out the following line in /etc/fstab:
/dev/sda2 /mnt auto defaults,comment=cloudconfig 0 0
This should prevent the instance from hanging on startup, or at least it did for me with ami-1234de7b.
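If you would rather script it, a one-liner along these lines (run on the instance; it just assumes the stock /etc/fstab location) does the same thing:
# Comment out the ephemeral-storage mount that a micro instance cannot satisfy
sudo sed -i 's|^/dev/sda2|#/dev/sda2|' /etc/fstab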
I created a new micro instance using an Alestic AMI (ami-2c354b7e). I was able to log in to the system normally the first time, but once I rebooted the system, I was not able to log in again.
Commenting out the line indicated above worked for me: "/dev/sda2 /mnt auto defaults,comment=cloudconfig 0 0"
Commenting the line out doesn't fix it fully: if you reboot, cloud-init will write the same line back in. You need to install the updated cloud-init from lucid-proposed:
$ l="deb http://archive.ubuntu.com/ubuntu lucid-proposed main"
$ echo "$l" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update && sudo apt-get install cloud-init
$ dpkg-query --show cloud-init
I'm assuming this will be fixed in the official Ubuntu release soon and you won't have to do this, but for now...
Source: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/634102
Also, we have a couple of images based on the official Ubuntu AMIs that work on Micros: http://blog.simpledeployr.com/2010/09/new-ruby-amis-with-latest-ubuntu-lucid.html
I don't see a problem on your side. This could be a problem in Amazon's infrastructure.
Related
I'm trying to limit resources by using cgroups. It works fine until I reboot the instance.
I checked and found that the cgroup had been removed for some reason. These are my steps for creating the cgroup:
# Create a cgroup
mkdir /sys/fs/cgroup/memory/my_cgroup
# Add the process to it
echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs
# Set the limit to 40MB
echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
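(For reference, the limit and membership can be checked afterwards like this, using the same paths as in the steps above:)
# Confirm the limit took effect and that the process joined the group
cat /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/my_cgroup/cgroup.procs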
I'm using AMI RHEL-7.5_HVM-20180813-x86_64, kernel version 3.10.0-862.11.6.el7.x86_64.
Could you guys help me out with this problem?
Thanks in advance.
It seems like the cgroup config is not persistent across reboots. I personally am not very familiar with it and can't test, but you can have a look at this: you need to configure cgconfig to persist your changes.
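For example, on RHEL 7 something along these lines should recreate the group at boot. This is only a sketch: it assumes the libcgroup-tools package is available, and it reuses the my_cgroup name and 40 MB limit from the question.
# Install the cgconfig tooling (RHEL/CentOS 7)
sudo yum install -y libcgroup-tools

# Describe the group in /etc/cgconfig.conf so it is recreated at boot
cat <<'EOF' | sudo tee -a /etc/cgconfig.conf
group my_cgroup {
    memory {
        memory.limit_in_bytes = 41943040;   # 40 MB
    }
}
EOF

# Apply it now and enable it for future boots
sudo systemctl start cgconfig
sudo systemctl enable cgconfig
Note that processes still have to be (re)assigned to the group after boot, for example from your service's startup script or via cgrules and the cgred service.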
I've set up a raspberry pi to execute a command if a USB stick is inserted, and the command calls an executable on the stick.
This works about 80% of the time, but intermittently fails - seemingly at random. Because of the unpredictability I assume this is a race condition, however I'm not too familiar with where the risks are as I've pieced together the approach from information online. Most of the information comes from here.
The USB stick is auto-mounted with the following entry in /etc/fstab. I'm aware of the risk of /dev/sda1 changing but that does not appear to be the problem here:
/dev/sda1 /media/usb vfat defaults,rw,nofail,user,umask=000 0 0
A service waits for the USB to mount, with the following configuration:
[Unit]
Description=USB Mount Trigger
Requires=media-usb.mount
After=media-usb.mount
[Service]
ExecStart=/script.sh
[Install]
WantedBy=media-usb.mount
media-usb.mount comes from systemctl list-units -t mount, and /script.sh calls the USB stick's executable.
In failure cases, where the USB's executable is not called, I see the following from systemctl status service_name:
Nov 15 22:49:14 raspberrypi systemd[1]: Dependency failed for USB Mount Trigger.
Nov 15 22:49:14 raspberrypi systemd[1]: service_name.service: Job service_name.service/start failed with result 'dependency'.
In these cases if I execute systemctl list-units -t mount I do not see media-usb.mount and my USB stick is not mounted to /media/usb.
I think an error or race condition in service_name.service is causing the USB mount to die, because (I believe) a successful mount is required to trigger the service. If the USB is never inserted, systemctl status service_name simply reports Active: inactive (dead), so something is triggering the service to try to execute.
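A way to see why the mount itself gave up (assuming the generated unit really is named media-usb.mount, as listed above) is to pull its log for the failing boot:
# Show what systemd logged for the generated mount unit and the underlying device
journalctl -b -u media-usb.mount
systemctl status media-usb.mount dev-sda1.device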
I am following the CoreOS in Action book (and also the CoreOS online instructions) to bring up a 3-node cluster using Vagrant and VirtualBox on macOS.
It all goes fine: the machines come up and run, and I can ssh into one of them, but it looks like the boxes brought up are missing fleetctl (which makes no sense, as it's such a core component of CoreOS):
$ vagrant ssh core-01 -- -A
Last login: Thu Mar 1 21:28:58 UTC 2018 from 10.0.2.2 on pts/0
Container Linux by CoreOS alpha (1702.0.0)
core@core-01 ~ $ fleetctl list-machines
-bash: fleetctl: command not found
core@core-01 ~ $ which fleetctl
which: no fleetctl in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin)
What am I doing wrong?
I have changed the number of instances to 3, created a new "discovery token URL", and updated the user-data file; Googling around, I seem to be the one and only person having this problem.
Thanks in advance for any suggestions you may have!
PS -- yes, I have tried (several times!) to vagrant destroy and rebuild the cluster: even nuked the repo and re-cloned it. Same issue every time.
The answer is going to make you a bit sad; here it is:
CoreOS no longer supports fleet. It's gone. Ciao :(
https://coreos.com/blog/migrating-from-fleet-to-kubernetes.html
To this end, CoreOS will remove fleet from Container Linux on February 1, 2018, and support for fleet will end at that time. fleet has already been in maintenance mode for some time, receiving only security and bugfix updates, and this move reflects our focus on Kubernetes and Tectonic for cluster orchestration and management.
You are using CoreOS 1702.0.0; fleet has been removed since CoreOS 1675.0.1: https://coreos.com/releases/
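If you want to confirm that on the box itself, a quick check (nothing beyond standard systemd tooling assumed):
# Show the running Container Linux release and look for any fleet units
cat /etc/os-release
systemctl list-unit-files | grep -i fleet || echo "no fleet units installed"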
I am trying to add more swap space in Docker in order to avoid this error when installing Oracle Database:
This system does not meet the minimum requirements for swap space.
Based on the amount of physical memory available on the system, Oracle
Database 11g Express Edition requires 2048 MB of swap space. This
system has 1023 MB of swap space. Configure more swap space on the
system and retry the installation.
I am following the instructions commented here:
https://forums.docker.com/t/docker-for-mac-configure-swap-space/20656/2
but when I execute mkswap I get "command not found":
mkswap /var/swap.file
Any idea?
Docker for Mac runs an Alpine Linux VM to host containers.
This is a prebuilt boot image that is designed for ease of use and is updated over time, so it can be hard to customise at times, as most config is reset when you reboot it.
In this case you can persist a swap file change, but config like this has the possibility of changing between versions without notice. You might be better off running a custom VM for this so your swap configuration hangs around.
Docker for Mac 17.06.0
Swap is controlled by the do_swapfile function in the /etc/init.d/automount init script in the VM. If the swap file exists, it will be used as is. As the swap file is stored in /var it is persisted across reboots and can be manually customised.
Attach to the VM's tty from your Mac with screen (brew install screen if you don't have it):
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Then, in the VM, replace the existing swap file with a new one of the required size and reboot the box. The size of the file is the block size bs * count (here 1k * 2097152 = 2 GiB):
# Turn off the existing swap file so it can be replaced
swapoff -a
# Write a 2 GiB file of zeros (bs * count = 1k * 2097152)
dd if=/dev/zero of=/var/spool/swap bs=1k count=2097152
# Lock down permissions and format the file as swap
chmod 600 /var/spool/swap
mkswap /var/spool/swap
# Reboot so the automount script picks up the new swap file
reboot
When the VM has rebooted, you should be able to connect again and see the new size of the VM's Swap space with free.
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
/ # free
             total       used       free     shared    buffers     cached
Mem:       3526164     389952    3136212     165956      20968     208160
-/+ buffers/cache:      160824    3365340
Swap:      2097148          0    2097148
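A container started from the Mac side should report the same numbers, since containers share the VM's kernel; a quick sanity check using the public alpine image:
docker run --rm alpine free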
I have an Amazon EC2 instance. I created a volume and attached it to /dev/sdj. I edited my fstab file to have the line:
/dev/sdj /home/ec2-user/mydirectory xfs noatime 0 0
Then I mounted it (sudo mount /home/ec2-user/mydirectory)
However, running the "mount" command says the following:
/dev/xvdj on /home/ec2-user/mydirectory type xfs (rw,noatime)
What? Why is it /dev/xvdj instead of /dev/sdj?
The devices are named /dev/xvdX rather than sdX in 11.04. This was a kernel change. The kernel name for xen block devices is 'xvd'. Previously Ubuntu carried a patch to rename those devices as sdX. That patch became problematic.
https://askubuntu.com/a/47909
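If the mismatch bothers you, a quick check like this (device and mount point taken from the question) shows what is going on; on many stock AMIs the sd name is just a udev symlink to the kernel's xvd name:
# See whether /dev/sdj is simply a symlink to the kernel device /dev/xvdj
ls -l /dev/sdj /dev/xvdj

# Optionally reference the kernel name directly in /etc/fstab instead:
# /dev/xvdj /home/ec2-user/mydirectory xfs noatime 0 0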