I'm launching a fresh Amazon EC2 instance running Amazon Linux (Amazon Linux AMI 2014.03.2 (HVM)). I'm using a larger instance type (e.g. m3.large), which comes with 1 x 32 (SSD) instance storage according to the launch page.
However, I'm not able to find that storage anywhere. The documentation I found says it should be listed by lsblk and mounted under /media, but /media is empty and lsblk shows only the 8GB root disk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
Does anyone have an idea how to access the 'default' instance storage that comes with my instance?
I have since received a solution via the AWS Forums. See here: https://forums.aws.amazon.com/thread.jspa?messageID=565737
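For reference, the usual catch on M3 instances is that the instance-store volume has to be included in the block device mapping when the instance is launched; otherwise it never shows up in lsblk at all. Once mapped, it still has to be formatted and mounted by hand. A minimal sketch, assuming the volume appears as /dev/xvdb (the device name is an assumption; check lsblk for the actual one):

```shell
# WARNING: mkfs erases the volume. /dev/xvdb is an assumed device name.
sudo mkfs -t ext4 /dev/xvdb             # create a filesystem on the ephemeral volume
sudo mkdir -p /media/ephemeral0         # conventional mount point on Amazon Linux
sudo mount /dev/xvdb /media/ephemeral0  # mount it
```

Keep in mind that instance-store data does not survive a stop/start of the instance, so treat it as scratch space.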
The exFAT partition of my external drive is not mounted automatically on my Mac.
I created a mytest directory under /Volumes and tried to mount the partition there, but I get the following error and cannot mount it.
Mounting works for every other partition on the drive, but not this one.
How can I get it mounted?
sh-3.2# diskutil list rdisk7
/dev/disk7 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *8.0 TB disk7
1: Microsoft Reserved 16.8 MB disk7s1
2: Microsoft Basic Data myexfhdtosi 4.4 TB disk7s2
3: Microsoft Basic Data 1.6 TB disk7s3
4: Apple_HFS myhfshddtosi 1000.0 GB disk7s4
sh-3.2# mount -v -t exfat /dev/rdisk7s2 /Volumes/mytest
mount_exfat: /dev/rdisk7s2 on /Volumes/mytest: Block device required
mount: /Volumes/mytest failed with 71
Try killing the fsck process and mounting again:
sudo pkill -f fsck
See this thread for a detailed discussion.
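Putting the suggestion together: kill fsck, then retry the mount. Note also what the error message itself is complaining about: mount_exfat wants the block device node (/dev/disk7s2), not the raw character device (/dev/rdisk7s2) used in the transcript above. A sketch:

```shell
sudo pkill -f fsck   # stop the fsck_exfat process holding the device
# Use the block device node; raw /dev/rdiskNsN nodes trigger the
# "Block device required" error seen above.
sudo mount -t exfat /dev/disk7s2 /Volumes/mytest
```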
I am trying to deploy an all-in-one configuration using kolla-ansible with Ceph enabled:
enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"
My setup is a VirtualBox VM running the Ubuntu 18.04.4 desktop version with 2 CPU cores, a single 30 GB disk, and 2 GB RAM; the partition table type is msdos.
ansible version==2.9.7
kolla-ansible version==9.1.0
To install a Ceph OSD using kolla-ansible, I read that the partition should be labeled KOLLA_CEPH_OSD_BOOTSTRAP_BS.
Hence I created a 20 GB root partition (/dev/sda1), an extended partition (/dev/sda2) for the remaining 20 GB, and then two 10 GB logical partitions (/dev/sda5 and /dev/sda6) for the OSDs. But an msdos partition table has no feature for assigning names to partitions.
So my questions are:
How do I label partitions on an msdos partition table so that kolla-ansible recognizes /dev/sda5 and /dev/sda6 as designated for Ceph OSDs?
Is a storage drive separate from the one containing the operating system required for a Ceph OSD (I know it's not recommended to have everything on a single disk)?
How should I provision the space on my single drive in order to install a Ceph OSD using kolla-ansible?
P.S.: I also tried installing Ceph with kolla-ansible on an OpenStack VM (4 CPU cores, a single 80 GB disk, since I hadn't installed Cinder in my OpenStack infra) running the Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:
/dev/vda1 for root partition
/dev/vda2 for ceph OSD
/dev/vda3 for ceph OSD
But the drawback was that kolla-ansible wiped the complete disk, and the installation failed.
Any help is highly appreciated. Thanks a lot in advance.
I also installed a kolla-ansible single-node all-in-one with Ceph as the storage backend, so I had the same problem.
Yes, a bluestore installation of Ceph doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.
For labeling, I used the following bash script:
#!/bin/bash
DEV="/dev/vdb"
(
echo g # create GPT partition table
echo n # new partition
echo # partition number (automatic)
echo # start sector (automatic)
echo +10G # end sector (use 10G size)
echo w # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS
Make sure that DEV at the beginning is set correctly for your setup. This creates a new partition table and one 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and then wipes the whole disk, so the size value doesn't matter; it only applies to the temporary partition on the disk.
A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup (assuming the default kolla installation path) at /etc/kolla/config/ceph.conf:
[global]
osd pool default size = 1
osd pool default min size = 1
This makes sure that kolla-ansible requests only one OSD. If your kolla directory with globals.yml is not under /etc/kolla/, adjust the path of the config file accordingly.
The solution for a setup with a single disk and multiple partitions is to switch the Ceph storage type in the kolla-ansible setup from bluestore to the older filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive .
With filestore you need one partition labeled KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition labeled KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required...). To switch your kolla installation to the filestore OSD type, edit the [storage] section of the all-in-one inventory file and add ceph_osd_store_type=filestore next to the host, overriding the default bluestore:
[storage]
localhost ansible_connection=local ceph_osd_store_type=filestore
The above method has been tested with ansible==2.9.7 and kolla-ansible==9.1.0 on the OpenStack Train release and prior releases.
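For the filestore layout described above, the labels can be applied with parted when creating the partitions. A sketch assuming a spare disk /dev/vdb (its contents will be destroyed; the sizes are only illustrative):

```shell
#!/bin/bash
# Create a GPT label plus a named data partition and journal partition
# for a filestore OSD. DEV and the sizes are assumptions; adjust to your setup.
DEV="/dev/vdb"
parted "$DEV" -s -- mklabel gpt
parted "$DEV" -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1MiB 9GiB
parted "$DEV" -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 9GiB 10GiB
parted "$DEV" -s -- print   # verify the partition names
```

GPT stores partition names in the table itself, which is why this works where msdos labeling does not.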
I have a vagrant-configured VirtualBox VM running Ubuntu 14.04. Whenever I do a large MySQL import inside the box, the import ends up incomplete. If I try to restart MySQL, this error occurs:
[FAIL] /etc/init.d/mysql: ERROR: The partition with /var/lib/mysql is too full! ... failed!
I resized the VMDK image by cloning to VDI, resizing, and cloning back to VMDK, as I had read this might fix the problem.
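For reference, the clone-resize-clone roundtrip can be done with VBoxManage roughly like this (the file names and the 24000 MB target are examples, not my exact setup):

```shell
# VMDK can't be resized directly, so convert to VDI, resize, convert back.
VBoxManage clonemedium disk box-disk1.vmdk box-disk1.vdi --format VDI
VBoxManage modifymedium disk box-disk1.vdi --resize 24000   # new size in MB
VBoxManage clonemedium disk box-disk1.vdi box-disk1-resized.vmdk --format VMDK
```

On older VirtualBox releases the subcommands are named clonehd and modifyhd instead.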
$ df -h
rootfs 9.1G 8.7G 8.0K 100% /
I'm aware that the /tmp filesystem is mounted separately in Ubuntu, so I even tried changing the tmp directory in mysql.ini, and that didn't work either.
$ VBoxManage showhdinfo <guid>
Parent UUID: base
State: created
Type: normal (base)
Storage format: VMDK
Format variant: dynamic default
Capacity: 24000 MBytes
Size on disk: 6894 MBytes
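Note that the showhdinfo output explains the symptom: the virtual disk's capacity is now 24000 MB, yet df still reports a 9.1G root filesystem at 100%. Enlarging the virtual disk does not grow the partition or the filesystem inside the guest; both have to be resized there as well. A sketch of those missing steps, assuming the root filesystem is ext4 on /dev/sda1:

```shell
sudo growpart /dev/sda 1   # grow partition 1 to fill the disk (growpart is in cloud-guest-utils)
sudo resize2fs /dev/sda1   # grow the ext4 filesystem to fill the partition
```

resize2fs can grow a mounted ext4 filesystem online; a reboot is only needed if the kernel refuses to re-read the partition table.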
I am running a micro instance in EC2 with 592 MB of available RAM.
Jenkins was crashing with out-of-memory build errors while running an UPDATE on a big SQL table in the backend.
Disk utilisation is 83%, with 6 GB of the 8 GB EBS volume used.
sudo du -hsx * | sort -rh | head -10    # run from /
2.7G    opt
1.5G    var
1.2G    usr
Running free -m showed only 6 MB free, with these services running:
(i) LAMPP
(ii) Jenkins
(iii) Mysql 5.6
I stopped LAMPP, and that freed 70 MB.
Then I closed Jenkins, which brought free memory to 320 MB.
Closing MySQL 5.6 brought it up to 390 MB free.
So about 200 MB of RAM is still in use with none of my services running.
Is 200 MB of RAM the minimum required for an Ubuntu micro instance running on Amazon EC2?
Nope, I believe it can run until RAM is 100% used.
If a task requires more memory than is available, the task is killed.
To free up more memory, you can run this from your terminal:
sudo apt-get autoremove
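If a process was killed for memory as described above, the kernel's OOM killer logs each kill, so you can confirm what happened. A sketch for checking (log locations vary by distribution):

```shell
dmesg | grep -i 'out of memory'                  # kernel ring buffer
sudo grep -i 'killed process' /var/log/syslog    # persistent log on Ubuntu
```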
I'm using Amazon EMR and I'm able to run most jobs fine. I'm running into a problem when I start loading and generating more data within the EMR cluster. The cluster runs out of storage space.
Each data node is a c1.medium instance. According to the links here and here, each data node should come with 350GB of instance storage. Through the ElasticMapReduce Slave security group, I've verified in the AWS Console that the c1.medium data nodes are running and are instance-store backed.
When I run hadoop dfsadmin -report on the namenode, each data node reports only about 10GB of storage. This is confirmed by running df -h:
hadoop#domU-xx-xx-xx-xx-xx:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 2.6G 6.8G 28% /
tmpfs 859M 0 859M 0% /lib/init/rw
udev 10M 52K 10M 1% /dev
tmpfs 859M 4.0K 859M 1% /dev/shm
How can I configure my data nodes to launch with the full 350GB storage? Is there a way to do this using a bootstrap action?
After more research and posting on the AWS forum, I got a solution, although not a full understanding of what happened under the hood. Thought I would post this as an answer if that's okay.
Turns out there is a bug in AMI version 2.0, which of course was the version I was trying to use. (I had switched to 2.0 because I wanted Hadoop 0.20 to be the default.) The bug in AMI version 2.0 prevents the mounting of instance storage on 32-bit instances, which is what c1.mediums launch as.
By specifying "latest" for the AMI version in the CLI tool, the problem was fixed and each c1.medium launched with the appropriate 350GB of storage.
For example:
./elastic-mapreduce --create --name "Job" --ami-version "latest" --other-options
More information about using AMIs and "latest" can be found here. Currently "latest" is set to AMI 2.0.4. AMI 2.0.5 is the most recent release, but it looks like it is also still a little buggy.