I am writing a shell script (meant for Ubuntu only) that assumes a disk has been previously opened (using the command below) so that it can perform operations on it (resize2fs, lvcreate, ...). However, this might not always be the case, and when the disk is closed, the user of the script has to run this line before running the script, which prompts for their passphrase:
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
Ideally, the script should start with this command, simplifying the user sequence. However, if the disk was indeed already opened, the script will fail because an encrypted disk cannot be opened twice.
How can I check whether the disk was previously opened? Is checking that /dev/mapper/sdaX_crypt exists a valid solution / enough? If not, or if that is not possible, is there a way to make the command run only when necessary?
I'd also suggest lsblk, but since I came here looking for relevant info and did find some, I thought I'd post the following command as well:
# cryptsetup status <device> | grep -qi active
Cheers
You can use the lsblk command.
If the disk is already unlocked, it will display two lines: the device and the mapped device, where the mapped device should be of type crypt.
# lsblk -l -n /dev/sdaX
sdaX 253:11 0 2G 0 part
sdaX_crypt (dm-6) 253:11 0 2G 0 crypt
If the disk is not yet unlocked, it will only show the device.
# lsblk -l -n /dev/sdaX
sdaX 253:11 0 2G 0 part
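In a script, this can be turned into a guard by asking lsblk for the TYPE column and only opening the device when no crypt mapping is listed yet. A minimal sketch, assuming the device and mapping names from the question:
if ! lsblk -l -n -o TYPE /dev/sdaX | grep -q '^crypt$'; then
    sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi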
Since I could not find a better solution, I went ahead and chose the "check if the device exists" one.
The encrypted disk embeds a specific Volume Group (called my-vg for the example), so my working solution is:
if [ ! -b /dev/my-vg ]; then
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi
I check that /dev/my-vg exists instead of /dev/mapper/sdaX_crypt because every other command in my script uses the former as an argument, so I kept it for consistency, but I reckon that the solution below looks more encapsulated:
if [ ! -b /dev/mapper/sdaX_crypt ]; then
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi
Although the solution I described above works for me, is there a good reason I should switch to the latter one, or does it not matter?
cryptsetup status volumeName
echo $? # Exit status should be 0 (success).
If you want to avoid displaying cryptsetup output, you can redirect it to /dev/null.
cryptsetup status volumeName > /dev/null
echo $? # Exit status should be 0 (success).
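In a script, this exit status can gate the open call directly, so the user is only prompted for the passphrase when necessary. A minimal sketch, reusing the device and mapping names from the question:
if ! sudo cryptsetup status sdaX_crypt > /dev/null 2>&1; then
    sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi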
This is a snippet from a script I wrote last night to take daily snapshots.
DEVICE=/dev/sdx
HEADER=/root/luks/arch/sdx-header.img
KEY_FILE=/root/luks/arch/sdx-key.bin
VOLUME=luksHome
MOUNTPOINT=/home
SUBVOLUME=#home
# Ensure encrypted device is active.
cryptsetup status "${VOLUME}" > /dev/null
IS_ACTIVE=$?
while [[ $IS_ACTIVE -ne 0 ]]; do
printf "Volume '%s' does not seem to be active. Activate? [y/N]\n" $VOLUME
read -N 1 -r -s
if [[ $REPLY =~ ^[Yy]$ ]]; then
cryptsetup open --header="${HEADER}" --key-file="${KEY_FILE}" "${DEVICE}" "${VOLUME}"
IS_ACTIVE=$?
else
exit 0
fi
done
I need to check if a file exists in a GitLab deployment pipeline. How can I do this efficiently and reliably?
Use gsutil ls gs://bucket/object-name and check the return value for 0.
If the object does not exist, the return value is 1.
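For example, a minimal sketch with placeholder bucket and object names:
if gsutil ls gs://my-bucket/path/to/object > /dev/null 2>&1; then
    echo "object exists"
else
    echo "object does not exist"
fi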
You can add the following shell script in a GitLab job:
#!/usr/bin/env bash
set -o pipefail
set -u
gsutil -q stat gs://your_bucket/folder/your_file.csv
PATH_EXIST=$?
if [ ${PATH_EXIST} -eq 0 ]; then
echo "Exist"
else
echo "Not Exist"
fi
I used the gcloud CLI and gsutil with the stat command and the -q option.
In this case, the command returns 0 if the file exists and 1 otherwise.
This answer evolved from Mazlum Tosun's answer. Because I think it is a substantial improvement, with fewer lines and no switching of global settings, it deserves to be a separate answer.
Ideally, the answer would be something like this:
- gsutil stat $BUCKET_PATH
- if [ $? -eq 0 ]; then
... # do if file exists
else
... # do if file does not exist
fi
$? stores the exit status of the previous command, 0 on success. This works fine in a local console. The problem with GitLab is that if the file does not exist, "gsutil stat $BUCKET_PATH" produces a non-zero exit code and the whole pipeline stops at that line with an error. We need to catch the error while still recording the outcome.
We will use the OR operator || to suppress the error. FILE_EXISTS=false will only be executed if gsutil stat fails.
- gsutil stat $BUCKET_PATH || FILE_EXISTS=false
- if [ "$FILE_EXISTS" = false ]; then
... # do stuff if file does not exist
else
... # do stuff if file exists
fi
We can also use the -q flag to silence the output of gsutil stat if that is desired.
- gsutil -q stat $BUCKET_PATH || FILE_EXISTS=false
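One caveat: FILE_EXISTS is never set when the file does exist, so the test compares an unset variable against false. Initializing it explicitly makes the intent clearer; a minimal sketch:
- FILE_EXISTS=true
- gsutil -q stat $BUCKET_PATH || FILE_EXISTS=false
- if [ "$FILE_EXISTS" = false ]; then
... # do stuff if file does not exist
else
... # do stuff if file exists
fi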
Problem: We occasionally face a problem where the Ubuntu OS gets remounted read-only. The reason is clear, as fstab specifies errors=remount-ro.
Question: Is there any mechanism to reboot the appliance if it ends up in a read-only mounted state?
Tried: I wrote the script below, which is monitored by watchdog. This works, but it reboots continuously as long as the script exits with 1 because some mount point is still read-only. What I want is a check on uptime: if it is less than a day, the script should not trigger a reboot even if a mount point is read-only.
root@ubuntu1404:/home/ubuntu# cat /rofscheck.sh
#!/bin/bash
now=`date`
echo "------------"
echo "start : ${now}"
up_time=`awk '{print int($1)}' /proc/uptime`
#if uptime is less than 1 day then skip test
if [ "$up_time" -lt 86400 ]; then
echo "uptime is less ${now}, exit due to uptime"
exit 0
fi
grep -q ' ro' /proc/mounts || exit 0
# alert watchdog that fs is readonly.
exit 1
Now in /etc/watchdog.conf below config is done.
test-binary = /rofscheck.sh
To reproduce the problem (remounting all mounted filesystems read-only), I ran this:
$ echo u > /proc/sysrq-trigger
which does an emergency read-only remount.
This script worked for me, even though it is quite similar to yours.
I am running it on Ubuntu 14.04 64-bit with the latest updates.
#!/bin/bash
now=$(date)
up_time=$(awk '{print int($1)}' /proc/uptime)
min_time=7200
#if uptime is less than 2 hours then skip test
if [ ${up_time} -lt ${min_time} ]; then
echo "uptime is ${up_time} secs, less than ${min_time} secs (now is ${now}): exit 0 due to uptime"
exit 0
fi
ro_count=$(grep -c ' ro,' /proc/mounts)
if [ ${ro_count} -gt 0 ]; then
exit 1
else
exit 0
fi
Please note that I have setup the minimum time as a variable, adjust it at your convenience.
A small but important notice:
I am using this solution on a couple of very cheap cloud servers I own, which I use for testing purposes only.
I have also setup filesystem check at every reboot with the following command:
tune2fs -c 1 /dev/sda1
I would never use this kind of watchdog usage in a production environment.
Hope this helps.
Best regards.
First, I want to share my experience of making a USB pendrive from an Ubuntu live ISO that is multiboot and can duplicate itself via a bash script. I will try to guide you through making something like it, and then, as I am not an expert, ask how I can make it faster (while booting, operating, or cloning).
First of all, you should partition your USB flash drive into two partitions with a tool like GParted: one FAT32 partition, and the other ext2 with a fixed size of 5500 MB (if you change its size, you have to change this number in the bash script too). The size of the first partition is the whole size of your USB flash drive minus the size of the second partition.
Second, you must download an Ubuntu ISO image (I downloaded Lubuntu 13.10 because it's faster, but I think Ubuntu should work too), copy it to the first partition (the FAT32 one), and rename it to ubuntu.iso.
Third, run this command to install the GRUB bootloader (you can find this command in the bash script too):
sudo grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot /dev/sdc1
"/mnt/usb2" directory is the one that you mounted the first partition and /dev/sdc1 is its device. If you don't know about this information just use fdisk -l or Menu->Preferences->Disks to find out. Then copy the following files in their mentioned directories and reboot to usb flash(for my motherboard by pushing F12 then selecting my flash device from the "HDD Hard" list .)
/path to the first partition/boot/grub/grub.cfg
set timeout=10
set default=0
menuentry "Run Ubuntu Live ISO Persistent" {
loopback loop /ubuntu.iso
linux (loop)/casper/vmlinuz persistent boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
initrd (loop)/casper/initrd.lz
}
menuentry "Run Ubuntu Live ISO(for clone to a new USB drive)" {
loopback loop /ubuntu.iso
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
initrd (loop)/casper/initrd.lz
}
The bash code:
/path to the first partition/boot/liveusb-installer
#!/bin/bash
destUSB=$1
# insert mountpoint, receive device name
get_block_from_mount() {
dir=$(readlink -f $1)
MOUNTNAME=`echo $dir | sed 's/\/$//'`
if [ "$MOUNTNAME" = "" ] ; then
echo ""
return 1
fi
BLOCK_DEVICE=`mount | grep "$MOUNTNAME " | cut -f 1 -d " "`
if [ "$BLOCK_DEVICE" = "" ] ; then
echo ""
return 2
fi
echo $BLOCK_DEVICE
return 0
}
sdrive=$(echo $destUSB | sed 's/\/dev\///')
if ! [ -f /sys/block/$sdrive/capability ] || ! [ $(($(< /sys/block/$sdrive/capability )&1)) -ne 0 ]
then
echo "Error: The argument must be the destination usb in /dev directory!"
echo "If you don't know this information just try 'sudo fdisk -l' or use Menu->Prefrences->Disks"
exit 1
fi
srcDirectory=/isodevice
srcDev=`get_block_from_mount $srcDirectory`
srcUSB="${srcDev%?}"
if [ $srcUSB == $destUSB ]; then
echo "Error: The argument of device is wrong! It's the source USB drive."
exit 1
fi
diskinfo=`sudo parted -s $destUSB print`
echo "$diskinfo"
# Find size of disk
v_disk=$(echo "$diskinfo"|awk '/^Disk/ {print $3}'|sed 's/[Mm][Bb]//')
second_disk=5500
if [ "$v_disk" -lt "6500" ]; then
echo "Error: the disk is too small!!"
exit 1
elif [ "$v_disk" -gt "65000" ]; then
echo "Error: the disk is too big!!"
exit 1
fi
echo "Partitioning ."
# Remove each partition
for v_partition in $(echo "$diskinfo" |awk '/^ / {print $1}')
do
umount -l ${destUSB}${v_partition}
parted -s $destUSB rm ${v_partition}
done
# Create partitions
let first_disk=$v_disk-$second_disk
parted -s $destUSB mkpart primary fat32 1 ${first_disk}
parted -s $destUSB mkpart primary ext2 ${first_disk} ${v_disk}
echo "Formatting .."
# Format the partition
mkfs.vfat ${destUSB}1
mkfs.ext2 ${destUSB}2 -L casper-rw
echo "Install grub into ${destUSB}1 ..."
mkdir /mnt/usb2
mount ${destUSB}1 /mnt/usb2
grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot $destUSB
cp $srcDirectory/boot/grub/grub.cfg /mnt/usb2/boot/grub
cp $srcDirectory/boot/liveusb-installer /mnt/usb2/boot
echo "Copy ubuntu.iso from ${srcUSB}1 to ${destUSB}1......"
cp $srcDirectory/ubuntu.iso /mnt/usb2
umount -l ${destUSB}1
rm -r /mnt/usb2
echo "Copy everything from ${srcUSB}2 to ${destUSB}2 ............"
dd if=${srcUSB}2 of=${destUSB}2
echo "It's done!"
exit 0
So after that, if you want to clone this flash drive, just reboot into the second option of the GRUB boot loader, plug in another USB flash drive, and run liveusb-installer /dev/sdc. It will make another USB drive, with every app installed on the first one, on the /dev/sdc drive. I made this script so all of my students have the same flash drive to practice programming with C, Python, or Sage everywhere. The speed of the non-persistent option (the second option in the GRUB menu) is fine, but the first option, the persistent one, takes 3-4 minutes to boot and is a little slow after that! Also, the installation (duplication) takes half an hour to complete! Is there any improvement to make it faster in any way?
Any suggestion will be appreciated.
As I said before, Lubuntu is faster when it boots non-persistent. So I inferred that if I keep only the home directory persistent, the rest of the root directory stays in RAM, and it should be faster. To achieve this, I changed things a little to boot with /home persistent and install every application after each boot, automatically. It turned out that this way the boot time doesn't change (booting + installing), but operation is much faster, which is great for me.
I didn't change grub.cfg at all. I changed the bash script (liveusb-installer) to label the second partition home-rw, so the rest of the folders just stay in RAM.
In the bash script /path to the first partition/boot/liveusb-installer, just change the line mkfs.ext2 ${destUSB}2 -L casper-rw to mkfs.ext2 ${destUSB}2 -L home-rw.
After changing liveusb-installer you can use it whenever you want to clone this USB drive. If you installed it before (using the recipe above), just boot into the second option of the GRUB menu (the non-persistent one), then format the second partition and label it home-rw. After that, reboot into the first option of the GRUB menu, go online, and install any programs that you wish to always have available.
sudo apt-get update
sudo apt-get install blablabla
After installing, copy all the packages and lists to the ~/apt directory by running these commands:
mkdir ~/apt
mkdir ~/apt/lubuntu-archives
mkdir ~/apt/lubuntu-lists
cp /var/cache/apt/archives/*.deb ~/apt/lubuntu-archives
cp /var/lib/apt/lists/*ubuntu* ~/apt/lubuntu-lists
Now put the following files in the ~/apt directory.
/home/lubuntu/apt/start-up
#!/bin/bash
apt_dir=/home/lubuntu/apt
# This script is meant to be run by /home/lubuntu/apt/autostart
for file in $(ls $apt_dir/lubuntu-archives)
do
ln -s $apt_dir/lubuntu-archives/$file /var/cache/apt/archives/$file
done
for file in $(ls $apt_dir/lubuntu-lists)
do
ln -s $apt_dir/lubuntu-lists/$file /var/lib/apt/lists/$file
done
apt-get install -y binutils gcc g++ make m4 perl tar \
vim codeblocks default-jre synapse
exit 0
Also, change the packages above to match the blablabla packages of the install command.
/home/lubuntu/apt/autostart
#!/bin/bash
# This script is meant to be run by /home/lubuntu/.config/lxsession/Lubuntu/autostart
# or the autostart of "Menu->Preferences->Default applications for LXSession"
xterm -e /usr/bin/sudo /bin/bash /home/lubuntu/apt/start-up
synapse
Then edit the file /home/lubuntu/.config/lxsession/Lubuntu/autostart and add the path of the above file to it, like this:
/home/lubuntu/apt/autostart
Now after each reboot a nice terminal opens and all the packages are installed as I wish! The advantage of this method over a persistent root directory is much faster operation, for instance when opening windows or running programs. But the cloning and booting times are still long. I would be glad if anybody could help me make it more professional and faster.
I need help with two scripts I'm trying to merge into one. There are two different ways to detect issues with a bad NFS mount: with one issue, a df will hang; with the other, the df works but there are other problems with the mount, which a find (mount name) -type d will catch.
I'm trying to combine the scripts to catch both issues: run the find -type d and return an error if there is an issue. If the second NFS issue occurs and the find hangs, kill the find command after 2 seconds, run the second part of the script, and return an error if that NFS issue is occurring. If neither type of NFS issue is occurring, return an OK.
MOUNTS="egrep -v '(^#)' /etc/fstab | grep nfs | awk '{print $2}'"
MOUNT_EXCLUDE=()
if [[ -z "${NFSdir}" ]] ; then
echo "Please define a mount point to be checked"
exit 3
fi
if [[ ! -d "${NFSdir}" ]] ; then
echo "NFS CRITICAL: mount point ${NFSdir} status: stale"
exit 2
fi
cat > "/tmp/.nfs" << EOF
#!/bin/sh
cd \$1 || { exit 2; }
exit 0;
EOF
chmod +x /tmp/.nfs
for i in ${NFSdir}; do
CHECK="ps -ef | grep "/tmp/.nfs $i" | grep -v grep | wc -l"
if [ $CHECK -gt 0 ]; then
echo "NFS CRITICAL : Stale NFS mount point $i"
exit $STATE_CRITICAL;
else
echo "NFS OK : NFS mount point $i status: healthy"
exit $STATE_OK;
fi
done
The MOUNTS and MOUNT_EXCLUDE lines are immaterial to this script as shown.
You've not clearly identified where ${NFSdir} is being set.
The first part of the script assumes ${NFSdir} contains a single directory value; the second part (the loop) assumes it may contain several values. Maybe this doesn't matter since the loop unconditionally exits the script on the first iteration, but it isn't the clear, clean way to write it.
You create the script /tmp/.nfs but:
You don't execute it.
You don't delete it.
You don't allow for multiple concurrent executions of this script by making a per-process file name (such as /tmp/.nfs.$$).
It is not clear why you hide the script in the /tmp directory with the . prefix to the name. It probably isn't a good idea.
Use:
tmpcmd=${TMPDIR:-/tmp}/nfs.$$
trap "rm -f $tmpcmd; exit 1" 0 1 2 3 13 15
...rest of script - modified to use the generated script...
rm -f $tmpcmd
trap 0
This gives you the maximum chance of cleaning up the temporary script.
There is no df left in the script, whereas the question implies there should be one. You should also look into the timeout command (though commands hung because NFS is not responding are generally very difficult to kill).
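To illustrate that last point, here is a sketch of a df probe wrapped in timeout; the 2-second limit comes from the question, and the exit codes mirror the script's conventions:
if ! timeout 2 df "${NFSdir}" > /dev/null 2>&1; then
    echo "NFS CRITICAL: df hung or failed on ${NFSdir}"
    exit 2
fi
echo "NFS OK: NFS mount point ${NFSdir} status: healthy"
exit 0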
I have written a small bash (4) script to back up shares from my Windows PCs. Currently I back up only one share, and the backup is visible only to root. Please give me some hints for improving this piece of code:
#!/bin/bash
# Script to back up directories on several windows machines
# Permissions, owners, etc. are preserved (-av option)
# Only differences are submitted
# Execute this script from where you want
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
# Specify the current date for the log-file name
current_date=$(date +%Y-%m-%d_%H%M%S)
# Specify the path to a list of file patterns not included in backup
script_path=$(dirname $(readlink -f $0))
rsync_excludes=$script_path/rsync_exclude.patterns
# Specify mount/rsync options
credential_file="/root/.smbcredentials"
# Specify windows shares
smb_shares=( //192.168.0.100/myFiles )
# Specify the last path component of the directory name to backup shares
# content into
smb_share_ids=( "blacksmith" )
# Specify with trailing '/' to transfer only the dir content
rsync_src="/mnt/smb_backup_mount_point/"
rsync_dst_root=(~/BACKUPS)
# Check if all arrays have the same size
if [ "${#smb_shares[#]}" -ne "${#smb_share_ids[#]}" ]; then
echo "Please specify one id for each samba share!"
exit 1
fi
# Run for loop to sync all specified shares
for (( i = 0 ; i < ${#smb_shares[@]} ; i++ ))
do
# Check if mount point already exists
echo -n "Checking if mount point exists ... "
if [ -d $rsync_src ]; then
echo "Exists, exit!"
exit 1
else
echo "No, create it"
mkdir $rsync_src
fi
# Try to mount share and perform rsync in case of success
echo -n "Try to mount ${smb_shares[$i]} to $rsync_src ... "
mount -t cifs ${smb_shares[$i]} $rsync_src -o credentials=$credential_file,iocharset=utf8,uid=0,file_mode=0600,dir_mode=0600
if [ "$?" -eq "0" ]; then
echo "Success"
# Specify the log-file name
rsync_logfile="$rsync_dst_root/BUP_${smb_share_ids[$i]}_$current_date.log"
# Build rsync destination root
rsync_dst="$rsync_dst_root/${smb_share_ids[$i]}"
# Check if rsync destination root already exists
echo -n "Check if rsync destination root already exists ... "
if [ -d $rsync_dst ]; then
echo "Exists"
else
echo "Does not exist, create it"
mkdir -p $rsync_dst
fi
# Run rsync process
# -av > archive (preserve owner, permissions, etc.) and verbosity
# --stats > print a set of statistics showing the effectiveness of the rsync algorithm for your files
# --bwlimit=KBPS > transfer rate limit - 0 defines no limit
# --progress > show progress
# --delete > delete files in $DEST that have been deleted in $SOURCE
# --delete-after > delete files at the end of the file transfer on the receiving machine
# --delete-excluded > delete excluded files in $DEST
# --modify-window > treat timestamps that differ by no more than this many seconds as equal
# --log-file > save log file
# --exclude-from > exclude everything from within an exclude file
rsync -av --stats --bwlimit=0 --progress --delete --delete-after --delete-excluded --modify-window=2 --log-file=$rsync_logfile --exclude-from=$rsync_excludes $rsync_src $rsync_dst
fi
# Unmount samba share
echo -n "Unmount $rsync_src ... "
umount $rsync_src
[ "$?" -eq "0" ] && echo "Success"
# Delete mount point
echo -n "Delete $rsync_src ... "
rmdir $rsync_src
[ "$?" -eq "0" ] && echo "Success"
done
Now I need some help concerning following topics:
Checking conditions like share existence and mount point existence (to make the script foolproof)
The mount command: is it correct, and do I grant the correct permissions?
Is there a better place for the backup files than the home directory of a user, if only root can see those files?
Do you think it would be helpful to integrate the backup of other file systems too?
The backup is rather slow (around 13 MB/s) although I have a gigabit network; possibly this is because of rsync's SSH encryption? The Linux system the share is mounted on has a PCI SATA controller, an old mainboard, 1 GB RAM, and an Athlon XP 2400+. Could there be other reasons for the slowness?
If you have more topics that can be addressed here, feel free to post them. I'm interested =)
Cheers
-Blackjacx
1) There is a lot of error checking you can do to make this script more robust: check for the existence and execute status of external executables (rsync, id, date, dirname, etc.), check the exit status of rsync (with if [ 0 -ne $? ]; then), ping the remote machine to make sure it is on the network, check that the user you are doing the backup for has a home directory on the local machine, check that the destination directory has enough space for the rsync, etc.
2) The mount command looks OK; did you want to try mounting read-only?
3) Depends on the number of users and the size of the backup. For small backups the home directory is probably fine, but if the backups are large, especially if you might run out of disk space, a dedicated backup location would be better.
4) Depends on the purpose of the backup. In my experience there are five kinds of things people back up: user data, system data, user configurations, system configurations, and entire disks or filesystems. Is there a separate system task that backs up the system data and configurations?
5) What other tasks are running while the backup takes place? If the backup is running automated at say 1 AM, there may be other system intensive tasks scheduled for the same time. What is your typical throughput rate for other kinds of data?
You are performing an array check on non-array variables.
Use a variable to specify the remote machine.
# Remote machine
declare remote_machine="192.168.0.100"
# Specify windows shares array
declare -a smb_shares=([0]="//$remote_machine/myFiles"
[1]="//$remote_machine/otherFiles" )
# Specify the last path component of the directory name to backup shares
# content into
declare -a smb_share_ids=( [0]="blacksmith" [1]="tanner" )
That way you can:
ping -c 1 $remote_machine
if [ 0 -ne $? ] ; then
echo "ping on $remote_machine Failed, exiting..."
exit 1
fi
If rsync_src is only used for this backup, you might want to try
mnt_dir="/mnt"
rsync_src=`mktemp -d -p $mnt_dir`
and I would create this directory before the for loop; otherwise you are creating and destroying it for every directory you back up.
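A minimal sketch of that arrangement, with a trap added (my addition, not part of the original script) so the directory is removed even on an early exit:
smb_shares=( //192.168.0.100/myFiles )   # from the question
mnt_dir="/mnt"
rsync_src=$(mktemp -d -p "$mnt_dir")     # e.g. /mnt/tmp.XXXXXXXXXX
trap 'rmdir "$rsync_src"' EXIT           # clean up even on early exit
for (( i = 0; i < ${#smb_shares[@]}; i++ )); do
    echo "would mount ${smb_shares[$i]} on $rsync_src"
done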