Why does this VirtualBox disk not dynamically expand? - vagrant

I have a Vagrant-configured VirtualBox VM running Ubuntu 14.04. Whenever I do a large MySQL import inside the box, the import comes out incomplete. If I try to restart MySQL, this error occurs:
[FAIL] /etc/init.d/mysql: ERROR: The partition with /var/lib/mysql is too full! ... failed!
I resized the VMDK image by cloning it to VDI, resizing the VDI, and cloning it back to VMDK, as I had read this might fix the problem.
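The resize sequence was roughly the following (the disk file names here are placeholders, not my actual paths):
$ VBoxManage clonehd box-disk1.vmdk box-disk1.vdi --format VDI
$ VBoxManage modifyhd box-disk1.vdi --resize 24000
$ VBoxManage clonehd box-disk1.vdi box-disk1-new.vmdk --format VMDK
Even after this, the root filesystem still reports the old size: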
$ df -h
rootfs 9.1G 8.7G 8.0K 100% /
I'm aware that the /tmp filesystem is mounted separately in Ubuntu, so I even tried changing the tmp directory in mysql.ini, and that didn't work either.
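For reference, the tmpdir change was along these lines (the directory is just an example and must exist and be writable by MySQL):
[mysqld]
tmpdir = /home/vagrant/mysql-tmp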
$ VBoxManage showhdinfo <guid>
Parent UUID: base
State: created
Type: normal (base)
Storage format: VMDK
Format variant: dynamic default
Capacity: 24000 MBytes
Size on disk: 6894 MBytes

Related

How can I get this partition mounted? "mount: /Volumes/mytest failed with 71"

The exFAT partition of my external drive is not mounted automatically on my Mac.
I created a mytest directory under /Volumes and tried to mount the partition there, but I get the error below and cannot mount it.
Mounting works for everything else, but not for this partition.
How can I get it mounted?
sh-3.2# diskutil list rdisk7
/dev/disk7 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *8.0 TB disk7
1: Microsoft Reserved 16.8 MB disk7s1
2: Microsoft Basic Data myexfhdtosi 4.4 TB disk7s2
3: Microsoft Basic Data 1.6 TB disk7s3
4: Apple_HFS myhfshddtosi 1000.0 GB disk7s4
sh-3.2# mount -v -t exfat /dev/rdisk7s2 /Volumes/mytest
mount_exfat: /dev/rdisk7s2 on /Volumes/mytest: Block device required
mount: /Volumes/mytest failed with 71
Try killing the fsck process and mounting again:
sudo pkill -f fsck
See this thread for a detailed discussion.
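The "Block device required" message also suggests retrying with the block device node instead of the raw one, something like:
sudo mount -t exfat /dev/disk7s2 /Volumes/mytest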

Installing ceph using kolla-ansible for all-in-one setup

I am trying to deploy the all-in-one configuration using kolla-ansible with Ceph enabled:
enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"
My setup is a VirtualBox VM running the Ubuntu 18.04.4 desktop version with 2 CPU cores, 2 GB RAM, and a single 30 GB disk; the partition table type is msdos.
ansible version==2.9.7
kolla-ansible version==9.1.0
I read that, in order to install a Ceph OSD using kolla-ansible, a partition should have the name KOLLA_CEPH_OSD_BOOTSTRAP_BS.
Hence, I created a 20 GB root partition (/dev/sda1), then an extended partition (/dev/sda2) for the remaining 20 GB, followed by two logical partitions (/dev/sda5 and /dev/sda6) of 10 GB each for the OSDs. But msdos-type partitioning has no feature for assigning names to partitions.
So my questions are:
How do I go about labeling partitions on an msdos-type partition table so that kolla-ansible recognizes that /dev/sda5 and /dev/sda6 are designated for the Ceph OSDs?
Is a storage drive separate from the one containing the operating system required for the Ceph OSD (I know it's not recommended to have everything on a single disk)?
How should I provision the space on my single drive in order to install the Ceph OSD using kolla-ansible?
P.S.: I also tried to install Ceph using kolla-ansible on an OpenStack VM (4 CPU cores, 80 GB of disk space on a single drive, as I didn't install Cinder in my OpenStack infra) with the Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:
/dev/vda1 for root partition
/dev/vda2 for ceph OSD
/dev/vda3 for ceph OSD
But the drawback was that kolla-ansible wiped the complete disk, and the installation failed.
Any help is highly appreciated. Thanks a lot in advance.
I also installed a Kolla-Ansible single-node all-in-one with Ceph as the storage backend, so I had the same problem.
Yes, the Bluestore installation of Ceph doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.
For labeling, I used the following bash script:
#!/bin/bash
DEV="/dev/vdb"
(
echo g # create GPT partition table
echo n # new partition
echo # partition number (automatic)
echo # start sector (automatic)
echo +10G # end sector (use 10G size)
echo w # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS
Make sure that DEV at the top of the script is set correctly for your setup. The script creates a new partition table and a single 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and then wipes the whole disk, so the size value does not really matter; it only applies to the temporary partition on the disk.
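To verify the label afterwards (just a quick check, not part of the original procedure), the partition name should show up in the parted output:
parted /dev/vdb print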
A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup at /etc/kolla/config/ceph.conf (assuming you used the default Kolla installation path), with the content:
[global]
osd pool default size = 1
osd pool default min size = 1
This is to make sure that only one OSD is required by kolla-ansible. If your Kolla directory containing globals.yml is not under /etc/kolla/, you have to change the path of the config file as well.
The solution for a setup with a single disk and multiple partitions is to switch the Ceph storage type in the kolla-ansible setup from Bluestore to the older Filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive .
With Filestore you need one partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required...); a labeling sketch follows the inventory snippet below. To switch your Kolla installation to the Filestore OSD type, edit the [storage] section of the all-in-one file and add ceph_osd_store_type=filestore next to the host, as follows, to override the default Bluestore:
[storage]
localhost ansible_connection=local ceph_osd_store_type=filestore
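A minimal labeling sketch for the Filestore case, assuming a GPT-labeled disk /dev/vdb with partition 1 as the data partition and partition 2 as the journal (device and partition numbers are assumptions):
parted /dev/vdb -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_FOO
parted /dev/vdb -- name 2 KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J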
The above method has been tested with ansible==2.9.7 and kolla-ansible==9.1.0 on the OpenStack Train release and prior releases.

Cloudera installation dfs.datanode.max.locked.memory issue on LXC

I have created a VirtualBox Ubuntu 14.04 LTS environment on my Mac.
Inside the Ubuntu VM, I've created a cluster of three LXC containers: one for the master and two nodes for slaves.
On the master, I started the installation of CDH5 using the following installer: http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
I have also made the necessary changes in /etc/hosts, including FQDNs and hostnames, and created a passwordless user named "ubuntu".
While setting up CDH5, I constantly run into the following error on the datanodes:
Exception in secureMain: java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) of 922746880 bytes is more than the datanode's available RLIMIT_MEMLOCK ulimit of 65536 bytes.
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1050)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
Krunal,
This solution will probably be too late for you, but maybe it can help somebody else, so here it is. First, make sure your ulimit is set correctly; a quick way to check it is sketched below.
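A rough sketch of the ulimit check, not from the original answer; the hdfs user and the limits.conf entry are assumptions for a typical CDH install:
$ ulimit -l    # current max locked memory for this shell, in kB
65536
# as root, raise the limit for the hdfs user in /etc/security/limits.conf, e.g.:
# hdfs  -  memlock  unlimited
# then restart the datanode so the new limit is picked up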
If it is instead a config issue, go to:
/run/cloudera-scm-agent/process/
find the latest config dir,
in this case:
1016-hdfs-DATANODE
search for the parameter in this dir:
grep -rnw . -e "dfs.datanode.max.locked.memory"
./hdfs-site.xml:163: <name>dfs.datanode.max.locked.memory</name>
and edit the value to the one the datanode expects, in your case 65536.
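The property in hdfs-site.xml then looks roughly like this (the rest of the file is omitted):
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <value>65536</value>
</property>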
I solved it by opening a separate tab in Cloudera Manager and setting the value from there.

How do I access EC2 Instance Storage from Amazon Linux?

I'm launching a fresh Amazon EC2 instance running Amazon Linux (Amazon Linux AMI 2014.03.2 (HVM)). I'm using a larger instance type (e.g. m3.large), which comes with 1 x 32 GB (SSD) instance storage according to the launch page.
However, I'm not able to find that storage anywhere. The documentation I found mentions that it should be listed by lsblk and mounted under /media, but /media is empty and lsblk only shows me the 8 GB root disk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
Does anyone have an idea how to access the 'default' instance storage that comes with my instance?
I have since received a solution via the AWS Forums. See here: https://forums.aws.amazon.com/thread.jspa?messageID=565737
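For reference (this is a hedged sketch, not the forum answer itself): once the instance-store volume does show up in lsblk, e.g. as xvdb, it can be formatted and mounted like any other disk:
$ sudo mkfs.ext4 /dev/xvdb
$ sudo mkdir -p /media/ephemeral0
$ sudo mount /dev/xvdb /media/ephemeral0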

webhdfs open file NullPointerException

I am trying to open a file from HDFS through the WebHDFS API. I can create files and upload them, but as soon as I try to open one I get this error
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
using the following command
curl -i -X GET "http://ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com:50070/webhdfs/v1/tmp/tmp.txt?op=OPEN"
I tried this from multiple machines (from the master node and remotely) and I get the same error. It's running on CDH 4.6.
thanks,
Apparently this is a bug in CDH running on Ubuntu 12.04, caused by /run being mounted with noexec.
It can be resolved as follows:
sudo mount -o remount,exec /run
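To confirm the noexec flag before and after the remount (a quick check, not part of the original answer; note that a remount does not persist across reboots):
$ mount | grep ' /run '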
