Ubuntu partitioning setup for running Docker machines on a Windows box

I am trying to set up Ubuntu Trusty on my Windows box using Oracle VM, to run Docker containers. I installed Ubuntu and selected "erase and install new OS" plus logical (LVM) partitioning as the options while installing. Now, when I try to run Docker, I get the warning below: WARN[0000] Your kernel does not support swap memory limit.
I did some research, and this seems to be the answer: https://docs.docker.com/engine/installation/ubuntulinux/ but it warns of a ~10% performance loss (from enabling cgroup memory/swap accounting), and there is also this issue: https://github.com/docker/docker/issues/16298.
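For reference, the change those Docker docs describe is a GRUB kernel command-line tweak, roughly the following on a stock Ubuntu install (this is the fix that carries the performance cost, not a partitioning change):
# edit /etc/default/grub and set:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# then regenerate the GRUB config and reboot:
sudo update-grub
sudo reboot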
Current partitions:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root 25G 4.1G 20G 18% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.3G 4.0K 3.3G 1% /dev
tmpfs 672M 956K 671M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.3G 152K 3.3G 1% /run/shm
none 100M 52K 100M 1% /run/user
/dev/sda1 236M 38M 186M 17% /boot
Env: Windows 7, Core i5, 16 GB memory, 1 TB SATA HDD (5400 RPM), Oracle VM 5.0.14x.
Question: How do you set up the Ubuntu VM partitions (assuming that is the issue here) so that all the available space can be used by Docker if it needs it? The Ubuntu installation obviously does not cover this, and neither does Docker.
I'm not much into Ubuntu and new to Docker, so any help would be great. Thanks.

Related

Hadoop + how to rebalance the HDFS

We have an HDP cluster, version 2.6.5, with 8 data nodes; all machines run RHEL 7.6.
The HDP cluster is based on the Ambari platform, version 2.6.1.
Each data node (worker machine) has two disks, and each disk is 1.8 TB in size.
When we access the data node machines, we can see differences in how full the disks are.
For example, on the first data node the usage is (from df -h):
/dev/sdb 1.8T 839G 996G 46% /grid/sdc
/dev/sda 1.8T 1014G 821G 56% /grid/sdb
On the second data node the usage is:
/dev/sdb 1.8T 1.5T 390G 79% /grid/sdc
/dev/sda 1.8T 1.5T 400G 79% /grid/sdb
On the third data node the usage is:
/dev/sdb 1.8T 1.7T 170G 91% /grid/sdc
/dev/sda 1.8T 1.7T 169G 91% /grid/sdb
And so on.
The big question is: why does HDFS not rebalance the HDFS disks?
For example, we would expect the used space to be roughly the same on all disks across all data node machines.
Why does the used size differ between datanode1, datanode2, datanode3, and so on?
Any advice about tuning parameters in HDFS that could help us?
This is very critical, because one disk can reach 100% usage while others are only around 50% full.
This is known behaviour of the HDFS re-balancer in HDP 2.6. There are many possible reasons for unbalanced block distribution.
With HDFS-1312, a disk balancer option was introduced to address this issue.
The following articles should help you tune it more efficiently:
HDFS Balancer (1): 100x Performance Improvement
HDFS Balancer (2): Configurations & CLI Options
HDFS Balancer (3): Cluster Balancing Algorithm
I would suggest upgrading to HDP 3.x, as HDP 2.x is no longer supported by Cloudera.
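A minimal sketch of the two tools involved, assuming the HDFS-1312 disk balancer is available in your HDP build and dfs.disk.balancer.enabled is set to true in hdfs-site.xml (hostnames are examples):
# balance blocks between datanodes (threshold = allowed % deviation from the cluster average)
hdfs balancer -threshold 10
# balance the disks inside a single datanode (the HDFS-1312 feature)
hdfs diskbalancer -plan datanode1.example.com
# -plan prints where it wrote the plan file; pass that path to -execute
hdfs diskbalancer -execute <plan-file-from-previous-step>.plan.json
hdfs diskbalancer -query datanode1.example.com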

Laravel Homestead using Parallels and Vagrant: cannot expand disk size

I'm trying to add more disk space to my virtual server (Homestead) using the Parallels provider on a MacBook.
The default disk size is 18 GB:
vagrant#homestead:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 964M 0 964M 0% /dev
tmpfs 199M 7.7M 192M 4% /run
/dev/mapper/homestead--vg-root 18G 11G 5.9G 65% /
tmpfs 994M 8.0K 994M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/mapper/homestead--vg-mysql--master 9.8G 234M 9.1G 3% /homestead-vg/master
10.211.55.2:/Users/orange/code 234G 234G 165G 59% /home/vagrant/code
vagrant 234G 69G 165G 30% /vagrant
tmpfs 199M 0 199M 0% /run/user/1000
I don't know why the default disk size of the VM is 64 GB while the Homestead root filesystem is only 18 GB:
☁ homestead-7.pvm prl_disk_tool resize --info --units G --hdd harddisk1.hdd
Operation progress 100 %
Disk information:
SectorSize: 512
Size: 64G
Minimum: 64G
Minimum without resizing the last partition: 64G
Maximum: 2047G
Warning! The last partition cannot be resized because its file system is either not supported or damaged.
Make sure that the virtual HDD is not used by another process.
Warning! The disk image you specified has snapshots.
You need to delete all snapshots using the prlctl command line utility before resizing the disk.
I've searched a lot, but it is still not solved.
How can I solve it?
(Sorry for my bad English.)
Based on the discussion in the reported issue on GitHub, the following command should help:
lvextend -r -l +100%FREE /dev/mapper/homestead--vg-root
There's a commit that should handle the issue, but it hasn't yet been released in a tagged Vagrant version.
The reason for this whole dance is that Vagrant packages the VirtualBox disk as a .vmdk, which doesn't have the same resize options as a .vdi does.
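A rough end-to-end sequence for the Parallels case, assuming the VM is named homestead-7, the root PV sits on /dev/sda1, and you want a 100 GB disk (all of these are examples; check your own layout first, shut the VM down before resizing, and note the prl_disk_tool warning above about deleting snapshots):
# on the Mac host, from the homestead-7.pvm directory
prlctl snapshot-list homestead-7
prlctl snapshot-delete homestead-7 --id <snapshot-id>
prl_disk_tool resize --hdd harddisk1.hdd --size 100G
# inside the guest: grow the partition, the PV, then the LV and filesystem
sudo growpart /dev/sda 1        # from cloud-guest-utils; parted works too
sudo pvresize /dev/sda1
sudo lvextend -r -l +100%FREE /dev/mapper/homestead--vg-root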

Oracle AWR report file cannot be written to disk (using Oracle Database Server Docker image)

I am trying to generate an AWR report in Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production (using Oracle Database Server Docker Image).
I am connected to the DB as sysdba.
In SQL*Plus I run @$ORACLE_HOME/rdbms/admin/awrrpt.sql; after answering the output format (html), the starting and ending snapshots, and the output file name, I get this output:
Using the report name awrrpt_1_1_4.html
SP2-0606: Cannot create SPOOL file "awrrpt_1_1_4.html"
Obviously, I cannot find the file awrrpt_1_1_4.html under the path /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/.
I have enough space:
Filesystem Size Used Avail Use% Mounted on
overlay 59G 13G 43G 23% /
tmpfs 64M 0 64M 0% /dev
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 59G 13G 43G 23% /ORCL
shm 64M 0 64M 0% /dev/shm
tmpfs 2.0G 0 2.0G 0% /proc/acpi
tmpfs 2.0G 0 2.0G 0% /sys/firmware
I gave the output directory $ORACLE_HOME/rdbms/admin/ chmod 777 permissions; I can also create a file there, put some data into it, and save it with the vi editor.
What can be the problem?
Why are you changing the permissions of Oracle DB's own directories? Just do it in your home directory:
create a directory such as /home/oracle/myWorkspace,
then execute your script there using spool, and there you go.
Cause: The STORE command was unable to create the specified file. There may be insufficient disk space, too many open files, or read-only protection on the output directory.
Action: Check that there is sufficient disk space and that the protection on the directory allows file creation.
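A minimal sketch of that suggestion (the directory name is only an example; SPOOL writes relative file names into the current working directory of the sqlplus session):
mkdir -p /home/oracle/myWorkspace
cd /home/oracle/myWorkspace
sqlplus / as sysdba
SQL> @?/rdbms/admin/awrrpt.sql
The generated awrrpt_*.html should then land in /home/oracle/myWorkspace.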

Spark on EC2, no space left on device

I'm running a Spark job that consumes 50 GB+, and my guess is that shuffle data written to disk is causing the space to run out.
I'm using the current Spark 1.6.0 EC2 script to build my cluster; close to finishing, I get this error:
16/03/16 22:11:16 WARN TaskSetManager: Lost task 29948.1 in stage 3.0 (TID 185427, ip-172-31-29-236.ec2.internal): java.io.FileNotFoundException: /mnt/spark/spark-86d64093-d1e0-4f51-b5bc-e7eeffa96e82/executor-b13d39ba-0d17-428d-846a-b1b1f69c0eb6/blockmgr-12c0d9df-3654-4ff8-ba16-8ed36ca68612/29/shuffle_1_29948_0.index.3065f0c8-2511-48ab-8bf0-d0f40ab524ba (No space left on device)
I've tried using various EC2 instance types, but they all seem to have just the 8 GB mounted for / when they start. df -h doesn't show any other storage mounted for /mnt/spark, so does that mean it's only using the little bit of space that's left?
My df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 4.1G 3.7G 53% /
devtmpfs 30G 56K 30G 1% /dev
tmpfs 30G 0 30G 0% /dev/shm
How do you expand the disk space? I've created my own AMI for this, based off the default Amazon Spark one, because of extra packages I need.
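For reference, Spark picks its shuffle spill location from spark.local.dir (or the SPARK_LOCAL_DIRS environment variable); a minimal check and override, assuming the layout from the error above (/mnt/bigdisk is only an example and must be a mounted volume that actually has space):
# is anything large actually mounted under the spill directory named in the error?
df -h /mnt/spark
grep -RE "spark.local.dir|SPARK_LOCAL_DIRS" $SPARK_HOME/conf/ 2>/dev/null
# point spills at a larger mounted volume instead of the 8 GB root disk
echo "spark.local.dir /mnt/bigdisk/spark" >> $SPARK_HOME/conf/spark-defaults.conf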

EC2 instance - Unable to use EBS volume

I have an EC2 instance (Ubuntu) in Amazon EC2. I have attached a 200 GB EBS volume to my instance (t2.micro).
The space status of my instance, as per the df -h command, is:
/dev/xvda1 7.8G 7.7G 0 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 492M 12K 492M 1% /dev
tmpfs 100M 364K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdf 197G 7.9G 179G 5% /data
It clearly shows that /dev/xvda1 is full. But my instance is not using the attached EBS volume, and hence my application server is not running. Please help me resolve this issue.
Note: I have tried to put swap (8 GB) on the EBS volume, which is why the device /dev/xvdf shows 5% used. But it is never used by my instance.
/dev/xvda1 seems to be your boot volume; this usually defaults to 8 GB when you start a new Linux instance unless you change the default value.
/dev/xvdf is an additional 200 GB volume you have attached to the instance; this is different from the boot volume.
The EC2 instance will not automatically use the 200 GB non-boot volume. You have to use it: format it, mount it, and move files off the boot volume onto it if you want, as sketched below.
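In this case the df output shows /dev/xvdf is already formatted and mounted at /data, so what's left is moving data off the root volume; a hedged sketch (the application path and service name are only examples):
# find what is filling the 8 GB root volume
sudo du -xh --max-depth=1 / | sort -h
# move a large directory onto the EBS volume and leave a symlink behind
sudo service myapp stop        # "myapp" stands in for your application server
sudo mv /var/lib/myapp /data/myapp
sudo ln -s /data/myapp /var/lib/myapp
sudo service myapp start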
If you'd rather have a larger boot volume see: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html
