Mounting sdb on /mnt using Docker and OpenFOAM - bash

I recently found the post "How to mount volumes in docker release of openFOAM" on this site, from October 2016. That post asks about automatically mounting a volume that is already mounted on the host (under bash or csh) inside the Docker version of OpenFOAM. Hopefully my situation is explained below.
Under csh, the output from lsblk is:
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb         8:16  0   1.8T  0 disk /mnt/hdd
sda          8:0  0 111.8G  0 disk
├─sda2       8:2  0     1K  0 part
├─sda5       8:5  0   7.9G  0 part [SWAP]
└─sda1       8:1  0 103.9G  0 part /
Then I run the script startOpenFOAM+, which is the following Bash shell script:
#!/bin/bash
# this script will
# i) Start OpenFOAM+ container with name 'of_v1612_plus'
# in the shell terminal.
# User also needs to run xhost + from another terminal
# Note: Docker daemon should be running before launching script
# PostProcessing: User can launch paraview/paraFoam from terminal
# to postprocess the results
# Note: user can launch the script in a different shell to have an
# OpenFOAM working environment in a different terminal
xhost +local:of_v1612_plus
docker start of_v1612_plus
docker exec -it of_v1612_plus /bin/bash -rcfile /opt/OpenFOAM/setImage_v1612+
I am dumped into a Bash shell and the output from lsblk is now:
bash-4.1$ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb         8:16  0   1.8T  0 disk
sda          8:0  0 111.8G  0 disk
|-sda2       8:2  0     1K  0 part
|-sda5       8:5  0   7.9G  0 part [SWAP]
`-sda1       8:1  0 103.9G  0 part /etc/sudoers.d
I guess the answer to the problem is to add a docker run -v .... line to the startOpenFOAM+ shell script. However, I am not sure what to replace the dots with, or where to place the command.
Any help would be much appreciated.
Thanks,
Peter.

If I understand correctly, you need this:
docker run -v /mnt/hdd:/mnt/hdd .....
But you didn't show where you run docker run; if you find it, add that -v option there.
Important: inside the container you will not see a mount point for sdb in lsblk, because Docker bind-mounts a directory, not a device. You will just see the contents under /mnt/hdd.
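A minimal sketch of how that could look for this setup (the image name openfoamplus/of_v1612_plus is an assumption; a -v bind mount cannot be added to an existing container, so it has to be re-created):
docker stop of_v1612_plus
docker rm of_v1612_plus                # removes the container; changes made inside it are lost
docker run -dit --name of_v1612_plus \
    -v /mnt/hdd:/mnt/hdd \
    openfoamplus/of_v1612_plus         # assumed image name; check yours with 'docker images'
After that, the existing startOpenFOAM+ script (docker start + docker exec) keeps working, and the contents of /mnt/hdd are visible inside the container.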


Arch Linux Installation: ERROR: Root device mounted successfully, but /sbin/init does not exist

I'm fairly new to Linux but decided to dive right in with Arch Linux to become familiar with everything.
Unfortunately I can't even finish the installation - shame on me.
The error while booting after setting arch up is:
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
I went for Btrfs on LUKS on LVM.
The layout looks like this:
sda
|- sda1 512MB fat32 /boot
`- sda2 remaining lvm
|- cryptswap 4GB swap
|- crypttmp 2GB tmp /tmp
`- cryptroot remaining btrfs
|- # /
|- #home /home
|- #snapshots /.snapshots
|- #log /var/log
|- #cache /var/cache
`- #tmp /var/tmp
These are the commands and configurations I used to set up Arch:
dd status=progress if=/dev/zero of=/dev/sda    # wipe disk
gdisk /dev/sda
o         # clear gpt table
# boot partition:
n
↵
↵
+512M
ef00
# lvm partition:
n
↵
↵
↵
8e00
w         # write partition changes
# setup lvm:
pvcreate /dev/sda2
vgcreate vg1 /dev/sda2
lvcreate -L 4G -n cryptswap vg1
lvcreate -L 2G -n crypttmp vg1
lvcreate -l 100%FREE -n cryptroot vg1
# setup encryption:
cryptsetup luksFormat /dev/vg1/cryptroot
cryptsetup open /dev/vg1/cryptroot root
# make filesystems:
mkfs.fat -F32 -n BOOT /dev/sda1
mkfs.btrfs --label ROOT /dev/mapper/root
# create btrfs subvolumes:
mount /dev/mapper/root /mnt
cd /mnt
btrfs subvolume create '#'    # quoted so the shell does not treat # as a comment
btrfs subvolume create '#home'
btrfs subvolume create '#snapshots'
btrfs subvolume create '#log'
btrfs subvolume create '#cache'
btrfs subvolume create '#tmp'
cd ..
umount /mnt
# mount btrfs subvolumes and BOOT partition:
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=# /dev/mapper/root /mnt
mkdir /mnt/home
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#home /dev/mapper/root /mnt/home
mkdir /mnt/.snapshots
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#snapshots /dev/mapper/root /mnt/.snapshots
mkdir /mnt/var
mkdir /mnt/var/log
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#log /dev/mapper/root /mnt/var/log
mkdir /mnt/var/cache
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#cache /dev/mapper/root /mnt/var/cache
mkdir /mnt/var/tmp
mount -o noatime,compress=lzo,space_cache=v2,discard=async,subvol=#tmp /dev/mapper/root /mnt/var/tmp
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot
pacstrap /mnt base linux linux-firmware lvm2 btrfs-progs amd-ucode vim    # install necessities
genfstab -L /mnt > /mnt/etc/fstab    # generate fstab
arch-chroot /mnt
# basic configuration:
ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
hwclock --systohc
vim /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" >> /etc/locale.conf
echo "KEYMAP=de-latin1" >> /etc/vconsole.conf
echo "devstation" >> /etc/hostname
vim /etc/hosts
vim /etc/mkinitcpio.conf
the mkinitcpio.conf content:
MODULES=(btrfs)
HOOKS=(base udev autodetect keyboard keymap consolefont modconf block lvm2 encrypt filesystems fsck)
mkinitcpio -p linux
bootctl install
echo "default arch" > /boot/loader/loader.conf
vim /boot/loader/entries/arch.conf
the arch.conf content:
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rw
exit
umount -a
poweroff
Then I pull the Arch installation medium out of the computer and start it up.
The boot output:
:: running early hook [udev]
Starting version 248.3-2-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook [keymap]
:: Loading keymap...done.
:: running hook [encrypt]
A password is required to access the root volume:
Enter passphrase for /dev/mapper/vg1-cryptroot: {inserting passphrase}
:: performing fsck on '/dev/mapper/root'
:: mounting '/dev/mapper/root' on real root
:: running cleanup hook [udev]
ERROR: Root device mounted successfully, but /sbin/init does not exist.
Bailing out, you are on your own. Good luck.
sh: can't access tty; job control turned off
[rootfs ]#
Obviously I didn't set up cryptswap and crypttmp yet. Those will be set up via crypttab and fstab. I am just mentioning this, and highly doubt it is part of the problem, because at the moment they are just partitions not recognized by anything, aren't they?
I hope I didn't miss any command or configuration I did - I am typing this from videos I watched and from memory, because no single video I found had the Btrfs, LUKS, LVM config I went with. Thanks for your time/help and for reading this through.
Adding rootflags=subvol=# to /boot/loader/entries/arch.conf like so
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options cryptdevice=UUID={/dev/vg1/cryptroot uuid inserted here}:root root=/dev/mapper/root rootflags=subvol=# rw
did the trick. Without it, the initramfs mounts the top level of the Btrfs volume, which contains only the subvolumes (#, #home, ...) and no /sbin/init; rootflags=subvol=# makes it mount the # subvolume as the real root.
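For anyone who wants to confirm the diagnosis first, this can be checked from the [rootfs ]# emergency shell before changing the boot entry (a sketch, using the device names from the question; /new_root is where the Arch initramfs mounts the real root):
umount /new_root 2>/dev/null                     # drop the wrong (top-level) mount if present
mount -o subvol='#' /dev/mapper/root /new_root   # mount the root subvolume explicitly
ls /new_root/sbin/init                           # now it exists; 'exit' resumes the boot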

ddrescue on Cygwin creates a zero-size image

I am trying to create an image of a disk (a USB flash key) in Cygwin using the ddrescue command. I do the following:
First, with the df command, I look where the disks are in Cygwin.
The output is:
C: 30716276 30489824 226452 100% /cygdrive/c
D: 56323856 55794432 529424 100% /cygdrive/d
F: 1953480700 1927260140 26220560 99% /cygdrive/f
H: 7847904 140324 7707580 2% /cygdrive/h
Then, to create the image of the disk h:/ I run the command like this:
ddrescue -v -n /cygdrive/h f:/___buffer/discoH.img discoH.log
The program runs for some time and is apparently reading the disk. As a result, the file
f:/___buffer/discoH.img is indeed created, but
its size is zero!
I tried some variations of the command options, but with the same result. The disk to be read is fully working and readable; for now I only want to learn how to create its image.
When using ddrescue under real Linux (Ubuntu), a non-zero-size image of the same disk is created without any problem. What could be the cause of the failure in Cygwin?
I still work in Windows XP SP3 32-bit; the version of Cygwin is:
$ uname -r
2.0.4(0.287/5/3)
$ uname -m
i686
On another computer, with Windows 8, the result is the same. Probably I am missing something elementary?
PS: the disk I want to image is 8GB, and there are 26GB of free space on the f:/ disk where I want to create the image.
/cygdrive/h is not a disk device; it is a mounted directory. Try with /dev/sdX instead.
You can identify the X letter from:
major minor     #blocks name   win-mounts
    8     0   976762584 sda
    8     1   960658432 sda1   D:\
    8     2    16102400 sda2   E:\
    8    16   250059096 sdb
    8    17      266240 sdb1
    8    18       16384 sdb2
    8    19   248765440 sdb3   C:\
    8    20     1003520 sdb4
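If you want to pull that name out automatically, something like this should work (a sketch; it relies on the win-mounts column that Cygwin adds to /proc/partitions, as shown above):
awk '$5 ~ /^H:/ {print "/dev/" $4}' /proc/partitions    # prints the device mounted as H: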
Thank you matzeri! Yours was really the elementary thing I needed but did not know.
So, I use the command cat /proc/partitions instead of df, get the disk reference
sdc1 instead of /cygdrive/h, and run the command
ddrescue -v -n /dev/sdc1 f:/___buffer/discoH.img discoH.log
instead of the one indicated above in my question text, and it works! The image is being recorded.

UBUNTU 14.04/16.04 Server Boot: Dev Mount failed, falls to initramfs

When booting a virtual server with Ubuntu 14.04/16.04 (I had the issue with both), it can't find the root partition, and the system falls to the initramfs shell with the following error:
(initramfs) exit
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/CAC_VG-CAC_LV does not exist. Dropping to a shell!
If I type
ls /dev/mapper/
I can still see the device mentioned in the error (and in GRUB):
root=/dev/mapper/CAC_VG-CAC_LV
The cat output as suggested in the error message:
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-66-generic root=/dev/mapper/CAC_VG-CAC_LV ro
Notice: it seems to mount the device read-only (ro). Maybe I should change this after I manage to start the system...
If I type exit I get the same error as above.
Then I try to mount:
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV
mount: can't find /dev/mapper/CAC_VG-CAC_LV in /etc/fstab
I had the same problem after a fresh install of Ubuntu 14.04, and this actually worked:
mount -o remount,rw /
lvm vgscan
lvm vgchange -a y
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV /root
exit
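If that gets you booted, the underlying cause is likely that the LVM volume group is not being activated by the initramfs. A possible way to make the fix permanent once the system is up (an assumption on my part, not from the original thread):
sudo apt-get install lvm2    # ensure the LVM activation scripts are present
sudo update-initramfs -u     # rebuild the initramfs so the VG is activated at boot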

Run a bash script at instance startup but not at reboot

For some instance types in AWS, there is an 'Instance Store 0' volume which you can select during the launch-instance wizard under 'Add Storage'. I found that this instance store needs to be reformatted and remounted every time the instance starts. However, once mounted, the mount stays intact across a reboot, but not across an instance stop/start. I execute the following code via /etc/rc.local, which basically serves the purpose.
#!/usr/bin/bash
FSTYPE=xfs
DEVICE=/dev/xvdb
mkfs -t $FSTYPE $DEVICE           # recreate the filesystem on the instance store
umount -f /mnt > /dev/null 2>&1   # drop any stale mount
mount $DEVICE /mnt
chmod 777 /mnt                    # world-writable...
chmod +t /mnt                     # ...with the sticky bit, like /tmp
exit
But rc.local gets executed at reboot as well as at startup. Is there a clean way in CentOS 7 to have this script run only during instance startup and not during reboots?
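One possible approach (a sketch under the behavior described above, not a tested recipe): since the instance store keeps its filesystem across a reboot but comes back blank after a stop/start, the script can probe for an existing filesystem and only reformat when none is found, which makes it safe to keep in rc.local for both cases:
#!/usr/bin/bash
FSTYPE=xfs
DEVICE=/dev/xvdb
# blkid exits non-zero when the device carries no filesystem (fresh start)
if ! blkid "$DEVICE" > /dev/null 2>&1; then
    mkfs -t "$FSTYPE" "$DEVICE"
fi
# mount only if /mnt is not already a mount point (reboot case)
mountpoint -q /mnt || mount "$DEVICE" /mnt
chmod 1777 /mnt    # 777 plus sticky bit, as in the original script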

How to search for a file or directory on an Ubuntu Linux machine

I created an EC2 instance (Ubuntu 64 bit) and attached a volume from a publicly available snapshot to the instance. I successfully mounted the volume. I am supposed to be able to run a script from this attached volume using the following steps as explained in the tutorial:
Log in to your virtual machine.
mkdir /space
mount /dev/sdf1 /space
cd /space
./setup-script
The problem is that, when I try ./setup-script, I get the following message:
-bash: ./setup-script: No such file or directory
What is the problem? How can I search for setup-script on the whole machine? I'm not very familiar with Linux. Please help.
For more details about the issue, look at my previous post:
Error when mounting drive
# Is it a script or an executable ?
file /space/setup-script
# Show us it is readable and marked executable
ls -l /space/setup-script
# Mark it executable
chmod a+x /space/setup-script
# Then try running it again? If you know it is shell script you can:
bash /space/setup-script
If it is still not working, then we get into why it won't execute.
grep space /proc/mounts
Do the mount options include noexec?
Try mount -o remount,exec /space and then try your instructions again.
NOTE: All commands presume you are the 'root' user, or you can 'sudo' each command.
It is possible that you have mounted the wrong device. I've just recalled a trick you can use to find the device name of an EBS volume in Linux, since it is often different from the device name reported in the AWS console. First unmount the device in Linux, then detach it from the instance using the AWS console, so we go back to the original state. Now run this command in Linux:
cat /proc/partitions
The command will show the volumes currently attached. The next step is to attach the volume to the instance using the AWS console, and then to run that same command again in Linux. You should see an additional line appear. This line will tell you the name of the device to mount. For example, I get this output in my Ubuntu instance:
major minor #blocks name
202 1 8388608 xvda1
202 80 8388608 xvdf
The first line was already there before I attached the volume, so I know this is my root volume. The second line is the one that appeared, so in this case, the device to mount would be /dev/xvdf.
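That before/after comparison can also be captured mechanically (a sketch of the same procedure described above):
cat /proc/partitions > /tmp/before    # with the volume detached
# ... attach the volume in the AWS console ...
cat /proc/partitions > /tmp/after
diff /tmp/before /tmp/after           # the added line names the device to mount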
