Gluster: strange issue with a shared mount point behaving like separate mounts - cluster-computing

I have two nodes. As an experiment I installed GlusterFS, created a volume, and successfully mounted it on each node, but if I create a file on node1 it does not show up on node2; it looks like both nodes are behaving as if they were separate.
node1
10.101.140.10:/nova-gluster-vol
2.0G 820M 1.2G 41% /mnt
node2
10.101.140.10:/nova-gluster-vol
2.0G 33M 2.0G 2% /mnt
Heal info (split-brain)
$ sudo gluster volume heal nova-gluster-vol info split-brain
Gathering Heal info on volume nova-gluster-vol has been successful
Brick 10.101.140.10:/brick1/sdb
Number of entries: 0
Brick 10.101.140.20:/brick1/sdb
Number of entries: 0
test
node1
$ echo "TEST" > /mnt/node1
$ ls -l /mnt/node1
-rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1
node2 (the file isn't there, even though both should be the same shared mount)
$ ls -l /mnt/node1
ls: cannot access /mnt/node1: No such file or directory
What am I missing?

An iptables rule solved my problem:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152 -j ACCEPT
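For context, glusterd itself listens on TCP 24007 (and 24008 for management), and each brick process gets its own port starting at 49152, so the single-port rule above covers exactly one brick per node. A sketch of a slightly broader rule set, assuming an iptables-based firewall and only a handful of bricks per node (the brick port range here is an assumption; widen it to match your volumes):
# allow Gluster management traffic
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
# allow one port per brick, starting at 49152
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49162 -j ACCEPT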

Related

How to mount NVMe EBS volumes of different sizes on the desired mount points using shell

After adding volumes to an EC2 instance using Ansible, how can I mount these devices by size on the desired mount points using a shell script that I will pass to user_data?
[ec2-user@xxx ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:4 0 200G 0 disk
├─nvme0n1p1 259:5 0 1M 0 part
└─nvme0n1p2 259:6 0 200G 0 part /
nvme1n1 259:0 0 70G 0 disk
nvme2n1 259:1 0 70G 0 disk
nvme3n1 259:3 0 70G 0 disk
nvme4n1 259:2 0 20G 0 disk
This is what I wrote initially, but I realized the NAME-to-SIZE mapping is not always the same for the NVMe devices:
#!/bin/bash
VOLUMES=(nvme1n1 nvme2n1 nvme3n1 nvme4n1)
PATHS=(/abc/sfw /abc/hadoop /abc/log /kafka/data/sda)
for index in ${!VOLUMES[*]}; do
  sudo mkfs -t xfs /dev/"${VOLUMES[$index]}"
  sudo mkdir -p "${PATHS[$index]}"
  sudo mount /dev/"${VOLUMES[$index]}" "${PATHS[$index]}"
  echo "Mounted ${VOLUMES[$index]} in ${PATHS[$index]}"
done
I am creating these volumes with Ansible and want the 20G volume mounted on /edw/logs, but the 20G volume randomly shows up as any of the devices (nvme1n1, nvme2n1, nvme3n1, or nvme4n1).
How should I write/modify my script?
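One approach (not from the original post) is to stop relying on device names entirely and look the disks up by size with lsblk, since NVMe enumeration order is not stable across boots or instances. A rough sketch, assuming the 20G disk is unique, the three 70G disks are interchangeable, and the mount paths are the ones used in the question:
#!/bin/bash
# Map disk sizes (as printed by lsblk) to the desired mount points.
# The three 70G disks are treated as interchangeable; adjust if they are not.
SEVENTY_PATHS=(/abc/sfw /abc/hadoop /abc/log)
TWENTY_PATH=/edw/logs

seventy_idx=0
# lsblk -d: whole disks only, -n: no header; prints lines like "nvme1n1 70G"
while read -r name size; do
  case "$size" in
    70G) path="${SEVENTY_PATHS[$seventy_idx]}"; seventy_idx=$((seventy_idx + 1)) ;;
    20G) path="$TWENTY_PATH" ;;
    *)   continue ;;   # skip the 200G root disk and anything unexpected
  esac
  sudo mkfs -t xfs "/dev/$name"
  sudo mkdir -p "$path"
  sudo mount "/dev/$name" "$path"
  echo "Mounted $name on $path"
done < <(lsblk -dn -o NAME,SIZE)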

bash script that edits a file using sed doesn't persist across a reboot

I've written a script to migrate the boot directory from a microSD card to a USB drive. This is for a Raspberry Pi 4 project. As part of that script, two files get updated to remove the microSD reference and use UUID references for the USB thumb drive. The script is run as sudo. The part of the script that updates the /boot/cmdline.txt and /etc/fstab files is as follows:
#!/bin/bash
fCMDLINE=/boot/cmdline.txt
fFSTAB=/etc/fstab
# $vPARTUUID and $vUUID are set earlier in the full script (identifiers of the USB drive)
# print cmdline.txt, swap the old PARTUUID for the USB drive's PARTUUID, print it again
cat $fCMDLINE
sed -i -r -e 's/PARTUUID=([a-z]\S*)/PARTUUID='"$vPARTUUID"'/g' $fCMDLINE
cat $fCMDLINE
# print fstab, swap each PARTUUID=... entry for a /dev/disk/by-uuid/ path, print it again
cat $fFSTAB
sed -i -r -e 's/PARTUUID=([a-z]\S*)/\/dev\/disk\/by-uuid\/'"$vUUID"'/g' $fFSTAB
cat $fFSTAB
After running the entire script the pre & post files are as follows:
CMDLINE FILE
#ORIGINAL
console=serial0,115200 console=tty1 root=PARTUUID=d9b3f436-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles
#UPDATED
console=serial0,115200 console=tty1 root=PARTUUID=0b1e4c33-0a73-4c26-aad2-03c1b5fd9266 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles
FSTAB FILE
#ORIGINAL
proc /proc proc defaults 0 0
PARTUUID=d9b3f436-01 /boot vfat defaults 0 2
PARTUUID=d9b3f436-02 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
#UPDATED
proc /proc proc defaults 0 0
/dev/disk/by-uuid/5bee13fa-5c62-45b0-91ed-12c544d4b528 /boot vfat defaults 0 2
/dev/disk/by-uuid/5bee13fa-5c62-45b0-91ed-12c544d4b528 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
The output for both files is correct, and the sed commands appear to have executed as expected in replacing the UUIDs. But when I reboot the Raspberry Pi, the original fstab file is the one that persists; it isn't updated. This only happens after the reboot; prior to that the files are correct and everything is mounted as expected.
#After Reboot
pi@raspberrypi:~ $ sudo cat /etc/fstab
proc /proc proc defaults 0 0
PARTUUID=d9b3f436-01 /boot vfat defaults 0 2
PARTUUID=d9b3f436-02 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
Not that it should matter, but here are the two files with permissions.
/boot/
-rwxr-xr-x 1 root root 191 Dec 8 01:01 cmdline.txt
/etc/
-rw-r--r-- 1 root root 314 Sep 26 01:31 fstab
The expected behavior would be for the fstab file to persist across a reboot the same way the cmdline.txt file does. There must be something obvious that I'm just missing. Any thoughts?
update...
I hadn't noticed before, but the two files above don't have the same date: the 9/26 fstab is the original from the Buster (9/26) image and didn't update. Before the reboot the file is:
-rw-r--r-- 1 root root 382 Dec 8 18:16 fstab
Baffling; it makes me think it's a hardware issue or a much deeper OS bug.
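Not an answer, but one way to narrow this down: the 9/26 timestamp suggests the fstab being read after reboot is not the copy the script edited, so it may be worth checking which block device actually backs / once the Pi is back up. A small diagnostic sketch using standard util-linux tools:
# which device and filesystem are currently mounted at / ?
findmnt /
# list every block device with its UUID/PARTUUID and current mount point
lsblk -o NAME,SIZE,UUID,PARTUUID,MOUNTPOINT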

Can't make a hard link to a mounted host file in an LXD container

I configured a host directory, e.g. /opt/app/var, as a disk device in an unprivileged LXD container, and created a backup directory on the container's own filesystem, e.g. /backup.
I used rsync to back up the /opt/app/var files to /backup with hard links, but I got Invalid cross-device link.
LXD container device config:
devices:
  var:
    path: /opt/app/var
    source: /opt/app/var
    type: disk
In the container:
$ cat /proc/mounts | grep opt
/dev/sda2 /opt/app/var ext4 rw,relatime,stripe=64,data=ordered 0 0
$ cat /proc/mounts | grep "/ "
/dev/sda2 / ext4 rw,relatime,stripe=64,data=ordered 0 0
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults 0 0
I found that the mount point created by LXD is backed by /dev/sda2, and the root partition is backed by /dev/sda2 too, so they should be on the same device.
This is not a container issue.
You cannot create hard links across mount points, even when it’s the same device you (bind) mounted to different places in your FS hierarchy.
Try this on your system:
> cd /tmp/
> mkdir bar
> mkdir barm1
> mkdir barm2
> sudo mount --bind bar barm1
> sudo mount --bind bar barm2
> cd barm1
> echo foo > foo
> ll ../barm2/
drwxr-xr-x 2 user users 4096 Jul 13 15:56 ./
drwxrwxrwt. 19 root root 147456 Jul 13 15:57 ../
-rw-r--r-- 1 user users 4 Jul 13 15:56 foo
> cp --link foo ../barm2/foo2
cp: cannot create hard link '../barm2/foo2' to 'foo': Invalid cross-device link
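For contrast, the same operation succeeds as long as source and target stay under a single mount point, which is what link(2) requires (it fails with EXDEV otherwise):
> cp --link foo foo2     # still inside barm1: works
> ls -i foo foo2         # both names now share one inode
For the rsync case, that means /backup has to live under the same mount point as /opt/app/var, or the hard-link option has to be dropped.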

Expanding HDFS storage in Cloudera QuickStart on Docker

I'm trying to use the Cloudera QuickStart Docker image, but it seems that there is no free space on HDFS (0 bytes).
After starting the container
docker run --hostname=$HOSTNAME -p 80:80 -p 7180:7180 -p 8032:8032 -p 8030:8030 -p 8888:8888 -p 8983:8983 -p 50070:50070 -p 50090:50090 -p 50075:50075 -p 50030:50030 -p 50060:50060 -p 60010:60010 -p 60030:60030 -p 9095:9095 -p 8020:8020 -p 8088:8088 -p 4040:4040 -p 18088:18088 -p 10020:10020 --privileged=true -t -i cloudera/quickstart /usr/bin/docker-quickstart
I can start Cloudera Manager
$/home/cloudera/cloudera-manager --express
and log in to the web GUI.
Here I can see that dfs.datanode.data.dir is the default /var/lib/hadoop-hdfs/cache/hdfs/dfs/data
On the console, hdfs dfsadmin -report gives me:
hdfs dfsadmin -report
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
But when I look inside the container:
df -h
Filesystem Size Used Avail Use% Mounted on
overlay 63G 8.3G 52G 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda1 63G 8.3G 52G 14% /etc/resolv.conf
/dev/sda1 63G 8.3G 52G 14% /etc/hostname
/dev/sda1 63G 8.3G 52G 14% /etc/hosts
shm 64M 0 64M 0% /dev/shm
cm_processes 5.9G 7.8M 5.9G 1% /var/run/cloudera-scm-agent/process
What do I have to do to add additional space to HDFS?
Here I can see that dfs.datanode.data.dir is the default /var/lib/hadoop-hdfs/cache/hdfs/dfs/data
You can use a volume mount into that directory.
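A minimal sketch of that, assuming a named Docker volume (hdfs-data is a name made up here) mounted over the DataNode data directory; the -p port mappings from the original command are omitted for brevity:
docker run --hostname=$HOSTNAME --privileged=true -t -i \
  -v hdfs-data:/var/lib/hadoop-hdfs/cache/hdfs/dfs/data \
  cloudera/quickstart /usr/bin/docker-quickstart
Add the same -p flags as before if you still need the web UIs and service ports exposed.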
More importantly, running df within a container is misleading, and on Mac or Windows, the Docker qcow2 file is only a limited size to begin with.
How do you get around the size limitation of Docker.qcow2 in the Docker for Mac?
It also looks like you have no datanodes running.

Use parted in a Chef recipe to build a partition

I'm just starting with Chef, and I'm trying to convert my old bash provisioning scripts into something more modern and reliable using Chef.
The first script is the one I used to build a partition and mount it on /opt.
This is the script: https://github.com/theclue/db2-vagrant/blob/master/provision_for_mount_disk.sh
#!/bin/bash
yum install -y parted
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary ext4 0% 100%
sleep 3
#-m switch tells mkfs to only reserve 1% of the blocks for the super-user
mkfs.ext4 /dev/sdb1
e2label /dev/sdb1 "opt"
######### mount sdb1 to /opt ##############
chmod 777 /opt
mount /dev/sdb1 /opt
chmod 777 /opt
echo '/dev/sdb1 /opt ext4 defaults 0 0' >> /etc/fstab
I found a parted recipe here, but it doesn't seem to support all the parameters I need (0% and 100%, to name two), and in any case I have no idea how to do the formatting/mounting part.
Any ideas?
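Not a complete cookbook, but one way to structure it (a sketch, not tested against any particular parted cookbook): keep the partitioning and formatting in guarded execute resources so they only run once, and let Chef's built-in mount resource replace both the manual mount and the echo into /etc/fstab (action :mount to mount it, action :enable to write the fstab entry). The shell those execute resources would wrap could look roughly like this, using blkid as the idempotency guard and the -m 1 the original comment refers to:
# only partition/format if /dev/sdb1 does not already carry a filesystem
if ! blkid /dev/sdb1 >/dev/null 2>&1; then
  parted -s /dev/sdb mklabel msdos
  parted -s /dev/sdb mkpart primary ext4 0% 100%
  sleep 3
  mkfs.ext4 -m 1 -L opt /dev/sdb1
fi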
