Is there a way to move the location of ICP private image registry of an installed ICP? - ibm-cloud-private

I am running low on disk space on the drive where the ICP private image registry is located. I know it's possible to specify a location for "/var/lib/registry" using a bind mount BEFORE ICP installation. Is there a procedure to safely move the location of the ICP private image registry on an existing ICP cluster?

How to move /current/directory to another partition/disk
Once you have added the extra disk, partitioned it, created the filesystem, and are ready to move /current/directory to it, take the following steps.
Assuming /dev/xvdc11 is the newly created partition with an ext4 filesystem:
# create a temp mount point and mount your new partition
mkdir /mnt/dirName
mount /dev/xvdc11 /mnt/dirName
# confirm that it is mounted.
df -h
# copy current directory content to the temp mount point
rsync -aqxP /current/directory/ /mnt/dirName/
# use bind mount to set the new location
mount --rbind /mnt/dirName /current/directory
# unmount and delete the temp mount point
umount /mnt/dirName
rm -rf /mnt/dirName
# persist this change across system reboots
echo "/dev/xvdc11 /current/directory ext4 defaults 0 0" >> /etc/fstab

Yes, you can do this. Please see the following for more details on image management:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/manage_images/image_manager.html
It is also possible to complete this task by using the API:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/apis/image_management.html

Related

Can I name a VeraCrypt volume on Mac OS?

When VeraCrypt 1.23 mounts a volume it is named NO NAME.
Is there a way to give these volumes a name?
I am using the console to create my containers
veracrypt -t -c $LOCATION --encryption=AES --hash=SHA-512 --filesystem=FAT --password=$PASSWORD --size=1G --volume-type=Normal --pim=$PIM --keyfiles=
I tried renaming the volume at /Volumes/NO\ NAME, but that just removes the volume from the desktop.
I also tried specifying a mount point:
Enter mount directory [default]: /Users/Test
But the volume still mounts as NO NAME
Using diskutil I can rename volumes, as below.
/usr/sbin/diskutil rename "NO NAME" "TEST2"
I am leaving the question open as this is a bit of a hack.
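A rough way to wrap that workaround into one step (a sketch only; it assumes the container still mounts with the default FAT label "NO NAME" and that veracrypt prompts for the password/PIM interactively, and the new name "TEST2" is just an example):
# mount the container, then rename the resulting FAT volume with diskutil
veracrypt -t "$LOCATION"
/usr/sbin/diskutil rename "NO NAME" "TEST2"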

EC2 Can't resize volume after increasing size

I have followed the steps for resizing an EC2 volume
Stopped the instance
Took a snapshot of the current volume
Created a new volume out of the previous snapshot with a bigger size in the same region
Detached the old volume from the instance
Attached the new volume to the instance at the same mount point
The old volume was 5GB and the one I created is 100GB.
Now, when I restart the instance and run df -h, I still see this:
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 4.7G 3.5G 1021M 78% /
tmpfs 296M 0 296M 0% /dev/shm
This is what I get when running
sudo resize2fs /dev/xvde1
The filesystem is already 1247037 blocks long. Nothing to do!
If I run cat /proc/partitions I see
202 64 104857600 xvde
202 65 4988151 xvde1
202 66 249007 xvde2
From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.
How can I use the new volume or umount xvde1 and mount xvde instead?
I cannot understand what I am doing wrong.
I also tried sudo xfs_growfs /dev/xvde1
xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem
By the way, this is a Linux box with CentOS 6.2 x86_64.
There's no need to stop the instance and detach the EBS volume to resize it anymore!
13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"
The process works even if the volume to extend is the root volume of a running instance!
Say we want to increase the boot drive of an Ubuntu instance from 8G up to 16G "on the fly".
step-1) log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
step-2) ssh into the instance and resize the partition:
let's list block devices attached to our box:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
As you can see /dev/xvda1 is still 8 GiB partition on a 16 GiB device and there are no other partitions on the volume.
Let's use "growpart" to resize 8G partition up to 16G:
# install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils
# resize partition
growpart /dev/xvda 1
Let's check the result (you can see /dev/xvda1 is now 16G):
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 16G 0 part /
Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when we change the boot drive.
step-3) resize the file system so it grows to use all of the new partition space
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.3G 1.1G 86% /
# resize filesystem
resize2fs /dev/xvda1
# Check after resizing ("Avail" now shows 8.7G!-):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 16G 6.3G 8.7G 42% /
So we have zero downtime and lots of new space to use.
Enjoy!
Update: Use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
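If a script has to handle both cases, one way to pick the right grow command is to check the filesystem type first. This is only a sketch (not from the thread); it assumes the root partition is /dev/xvda1 as in the examples above:
# detect the filesystem type and grow it accordingly
fstype=$(lsblk -no FSTYPE /dev/xvda1)
if [ "$fstype" = "xfs" ]; then
    sudo xfs_growfs /
else
    sudo resize2fs /dev/xvda1
fi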
Thank you Wilman, your commands worked correctly. A small improvement needs to be considered if we are increasing EBS volumes to larger sizes:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (e.g. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access via SSH to the instance and run fdisk /dev/xvde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u')
Hit p to show current partitions
Hit d to delete the current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance OR use partprobe (from the parted package) to tell the kernel about the new partition table
Log via SSH and run resize2fs /dev/xvde1
Finally check the new space running df -h
Perfect comment by jperelli above.
I faced the same issue today. The AWS documentation does not clearly mention growpart. I figured it out the hard way, and indeed the two commands worked perfectly on M4.large & M4.xlarge with Ubuntu:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
[SOLVED]
This is what had to be done:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (e.g. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access via SSH to the instance and run fdisk /dev/xvde
Hit p to show current partitions
Hit d to delete the current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance
Log via SSH and run resize2fs /dev/xvde1
Finally check the new space running df -h
This is it
Good luck!
This will work for the XFS file system; just run this command:
xfs_growfs /
login into AWS web console -> EBS -> right mouse click on the one you wish to resize -> "Modify Volume" -> change "Size" field and click [Modify] button
growpart /dev/xvda 1
resize2fs /dev/xvda1
This is a cut-to-the-chase version of Dmitry Shevkoplyas' answer. The AWS documentation does not show the growpart command. This works OK for the Ubuntu AMI.
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
The above two commands saved my time on AWS Ubuntu EC2 instances.
Once you modify the size of your EBS volume:
List the block devices
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 10G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 10G 0 part /
Expand the partition
Suppose you want to extend the second partition mounted on /,
sudo growpart /dev/nvme0n1 2
If all the space in the root volume is used up and you basically can't write to /tmp (i.e. growpart fails with a "no space left" error):
temporarily mount a /tmp volume: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
unmount it after the resize is complete: sudo umount -l /tmp
Verify the new size
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 20G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 20G 0 part /
Resize the file system
For XFS (use the mount point as argument)
sudo xfs_growfs /
For EXT4 (use the partition name as argument)
sudo resize2fs /dev/nvme0n1p2
Just in case anyone is here for GCP (Google Cloud Platform), try this:
sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
In case anyone ran into this issue at 100% disk use, with no space left even to run the growpart command (because it creates a file in /tmp):
Here is a command I found that works even while the EBS volume is in use, and even if you have no space left on your EC2 instance and are at 100%:
/sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%
see this site here:
https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis
Did you make a partition on this volume? If you did, you will need to grow the partition first.
Thanks @Dimitry, it worked like a charm with a small change to match my file system.
source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#recognize-expanded-volume-linux
Then use the following command, substituting the mount point of the filesystem (XFS file systems must be mounted to resize them):
[ec2-user ~]$ sudo xfs_growfs -d /mnt
meta-data=/dev/xvdf isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 262144 to 26214400
Note
If you receive an xfsctl failed: Cannot allocate memory error, you may need to update the Linux kernel on your instance. For more information, refer to your specific operating system documentation.
If you receive a The filesystem is already nnnnnnn blocks long. Nothing to do! error, see Expanding a Linux Partition.
The bootable flag (a) didn't work in my case (EC2, CentOS 6.5), so I had to re-create the volume from the snapshot.
After repeating all the steps EXCEPT the bootable flag, everything worked flawlessly and I was able to resize2fs afterwards.
Thank you!
I don't have enough rep to comment above, but note, per the comments above, that you can corrupt your instance if you start at 1. If you hit 'u' after starting fdisk, before you list your partitions with 'p', it will in fact give you the correct start number so you don't corrupt your volumes. For the CentOS 6.5 AMI, as mentioned above, 2048 was correct for me.
Put a space between the device name and the partition number, e.g.:
sudo growpart /dev/xvda 1
From the AWS documentation: "To extend the partition on each volume, use the following growpart commands. Note that there is a space between the device name and the partition number."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
I faced a similar issue with an Ubuntu system on EC2.
First, I checked the filesystem:
lsblk
Then, after increasing the volume size from the console, I ran the commands below:
sudo growpart /dev/nvme0n1 1
This will show the change in the lsblk output.
Then I could extend the FS with:
sudo resize2fs /dev/nvme0n1p1
Finally, verify it with the df -h command.

How to check for usb device with if statement in bash

I'm attempting to create an automated bash script that fills up a file with urandom in the unit's flash storage. I can manually use all of the commands to make this happen, but I'm trying to create a script and having difficulty figuring out how to check for the USB device. I know that it will be either sda1 or sdb1, but I'm not sure if the code below is sufficient...? Thanks! Below is the code:
if [ -b /dev/sda1 ]
then
mount -t vfat /dev/sda1 /media/usbkey
else
mount -t vfat /dev/sdb1 /media/usbkey
fi
The way I script mountable drives is to first put a file on the drive, e.g. "Iamthemountabledrive.txt", then check for the existence of that file. If it isn't there, then I mount the drive. I use this technique to make sure an audio server is mounted for a 5 radio station network, checking every minute in case there is a network interrupt event.
testfile="/dev/usbdrive/Iamthedrive.txt"
if [ -e "$testfile" ]
then
echo "drive is mounted."
fi
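A slightly fuller sketch of that approach, mounting the drive when the marker file is missing (the /media/usbkey path and vfat type are taken from the question; the marker file name and device are just examples):
testfile="/media/usbkey/Iamthedrive.txt"
if [ -e "$testfile" ]
then
echo "drive is mounted."
else
mount -t vfat /dev/sda1 /media/usbkey
fi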
You can mount by label or UUID and hence reduce the complexity of your script. For example if your flash storage has label MYLABEL (you can set and display VFAT labels using mtools' mlabel):
$ sudo mount LABEL=MYLABEL /media/usbkey

Checking for availability of network disk on mac

I'm creating myself a script to automate the backing up of certain directories on my mac to an airdisk (usb disk on my airport extreme).
I was reading up about rsync. It seems that if the airdisk isn't mounted, rsync creates the directory in "/Volumes/the name of the disk".
This could fill up my hard drive and it isn't supposed to make the backup on my local drive.
Therefore I want to check if the mounted drive is available before I start the rsync command.
Can anyone help?
I would check to see if a file located in the mount exists. As long as you mount the disk in the same location each time, this should work.
if [ -f /Volumes/AirDisk/foo.txt ];
then
echo "AirDisk mounted. Starting backup"
#Put backup script here
else
echo "File does not exists"
exit 1
fi
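Wrapped around the rsync job from the question, the whole check-then-backup script might look roughly like this (a sketch; the marker file, source directory, and destination path are placeholders):
if [ -f /Volumes/AirDisk/foo.txt ];
then
echo "AirDisk mounted. Starting backup"
rsync -a ~/Documents/ /Volumes/AirDisk/backup/
else
echo "AirDisk not mounted, skipping backup"
exit 1
fi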

Finding the path of a mounted network share on Mac OS X

I'd like to find out where a network share is mounted when the mount command fails like this:
$ mkdir ~/share
$ mount_afp afp://server/share ~/share
mount_afp: the volume is already mounted
This looked promising...
$ mount
... snip ...
afp_000000004oMw0q76003DF78u-1.2d000006 on /Volumes/share-1 (afpfs, nodev, nosuid, mounted by username)
afp_000000004oMw0q76003DF78u-2.2d000007 on /Volumes/share-2 (afpfs, nodev, nosuid, mounted by username)
It seems like there should be a way to map those long afp_000... numbers to URIs... Is there any way to determine where a volume is mounted given its afp:// URI?
I'm actually executing these commands with Python's subprocess module, so if there's a module or library that can do it that would be acceptable as well.
Try /Volumes/PUBLIC
(Or whatever Get Info tells you after looking at the file with Finder)
That's what worked for me.
Do you mean where it is mounted on the remote server or where it's mounted locally? If you're talking about the local system, the mount point should be in /Network/Servers unless otherwise specified by fstab, autofs, or an arg to mount. You could scan /Network/Servers for the share name...
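Since the share name usually shows up in the local mount point (e.g. /Volumes/share-1 in the mount output above), one rough option is to parse the output of mount for afpfs entries. This is only a sketch (the "share" pattern is a placeholder) and it could just as easily be run through Python's subprocess:
# print the mount points of all afp volumes, then keep the ones matching the share name
mount | awk '/afpfs/ {print $3}' | grep -i share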
