First, I want to share my experience of making a USB pen drive from an Ubuntu live ISO that is multiboot and can duplicate itself with a bash script. I will guide you through building one, and then, since I am not an expert, ask how I can make it faster (while booting, operating, or cloning).
First of all, partition your USB flash drive into two partitions with a tool like GParted: one FAT32 partition and one ext2 partition with a fixed size of 5500 MB (if you change its size, you have to change this number in the bash script too). The size of the first partition is the whole size of your USB flash drive minus the size of the second partition.
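If you prefer the command line over GParted, a rough sketch with parted would look like this (assuming the stick shows up as /dev/sdc and is about 16000 MB in total; adjust the device and sizes, and double-check the device name, because this wipes it):
sudo parted -s /dev/sdc mklabel msdos
sudo parted -s /dev/sdc mkpart primary fat32 1 10500
sudo parted -s /dev/sdc mkpart primary ext2 10500 16000
sudo mkfs.vfat /dev/sdc1
sudo mkfs.ext2 -L casper-rw /dev/sdc2
The label on the second partition should match what the bash script uses (casper-rw for full persistence, or home-rw for the home-only persistence described further down).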
Second, download an Ubuntu ISO image (I downloaded lubuntu 13.10 because it is faster, but I think Ubuntu should work too), copy it to the first partition (the FAT32 one), and rename it to ubuntu.iso.
Third, run this command to install the GRUB bootloader (you can find this command in the bash script too):
sudo grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot /dev/sdc1
"/mnt/usb2" directory is the one that you mounted the first partition and /dev/sdc1 is its device. If you don't know about this information just use fdisk -l or Menu->Preferences->Disks to find out. Then copy the following files in their mentioned directories and reboot to usb flash(for my motherboard by pushing F12 then selecting my flash device from the "HDD Hard" list .)
/path to the first partition/boot/grub/grub.cfg
set timeout=10
set default=0
menuentry "Run Ubuntu Live ISO Persistent" {
loopback loop /ubuntu.iso
linux (loop)/casper/vmlinuz persistent boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
initrd (loop)/casper/initrd.lz
}
menuentry "Run Ubuntu Live ISO(for clone to a new USB drive)" {
loopback loop /ubuntu.iso
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
initrd (loop)/casper/initrd.lz
}
The bash script:
/path to the first partition/boot/liveusb-installer
#!/bin/bash
destUSB=$1
# insert mountpoint, receive device name
get_block_from_mount() {
dir=$(readlink -f $1)
MOUNTNAME=`echo $dir | sed 's/\/$//'` # strip a trailing slash, if any
if [ "$MOUNTNAME" = "" ] ; then
echo ""
return 1
fi
BLOCK_DEVICE=`mount | grep "$MOUNTNAME " | cut -f 1 -d " "`
if [ "$BLOCK_DEVICE" = "" ] ; then
echo ""
return 2
fi
echo $BLOCK_DEVICE
return 0
}
sdrive=$(echo $destUSB | sed 's/\/dev\///')
if ! [ -f /sys/block/$sdrive/capability ] || ! [ $(($(< /sys/block/$sdrive/capability )&1)) -ne 0 ]
then
echo "Error: The argument must be the destination usb in /dev directory!"
echo "If you don't know this information just try 'sudo fdisk -l' or use Menu->Prefrences->Disks"
exit 1
fi
srcDirectory=/isodevice
srcDev=`get_block_from_mount $srcDirectory`
srcUSB="${srcDev%?}"
if [ "$srcUSB" == "$destUSB" ]; then
echo "Error: The argument of device is wrong! It's the source USB drive."
exit 1
fi
diskinfo=`sudo parted -s $destUSB print`
echo "$diskinfo"
# Find size of disk
v_disk=$(echo "$diskinfo"|awk '/^Disk/ {print $3}'|sed 's/[Mm][Bb]//')
second_disk=5500
if [ "$v_disk" -lt "6500" ]; then
echo "Error: the disk is too small!!"
exit 1
elif [ "$v_disk" -gt "65000" ]; then
echo "Error: the disk is too big!!"
exit 1
fi
echo "Partitioning ."
# Remove each partition
for v_partition in $(echo "$diskinfo" |awk '/^ / {print $1}')
do
umount -l ${destUSB}${v_partition}
parted -s $destUSB rm ${v_partition}
done
# Create partitions
let first_disk=$v_disk-$second_disk
parted -s $destUSB mkpart primary fat32 1 ${first_disk}
parted -s $destUSB mkpart primary ext2 ${first_disk} ${v_disk}
echo "Formatting .."
# Format the partition
mkfs.vfat ${destUSB}1
mkfs.ext2 ${destUSB}2 -L home-rw
echo "Install grub into ${destUSB}1 ..."
mkdir /mnt/usb2
mount ${destUSB}1 /mnt/usb2
grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot $destUSB
cp $srcDirectory/boot/grub/grub.cfg /mnt/usb2/boot/grub
cp $srcDirectory/boot/liveusb-installer /mnt/usb2/boot
echo "Copy ubuntu.iso from ${srcUSB}1 to ${destUSB}1......"
cp $srcDirectory/ubuntu.iso /mnt/usb2
umount -l ${destUSB}1
rm -r /mnt/usb2
echo "Copy everything from ${srcUSB}2 to ${destUSB}2 ............"
dd if=${srcUSB}2 of=${destUSB}2
echo "It's done!"
exit 0
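For reference, since casper mounts the first partition at /isodevice, the script is invoked from the booted live session roughly like this (the target /dev/sdc is just an example; check yours with sudo fdisk -l):
sudo bash /isodevice/boot/liveusb-installer /dev/sdc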
So after that, if you want to clone this flash drive, just reboot to the second option of the GRUB boot loader, plug in another USB flash drive, and run liveusb-installer /dev/sdc. It will make another USB drive, with every app installed on the first one, on the /dev/sdc device. I wrote this so that all of my students have the same flash drive to practice programming with C, Python, or Sage everywhere. The speed of the non-persistent mode (the second option in the GRUB menu) is fine, but the first option, the persistent one, takes 3-4 minutes to boot and is a little slow afterwards. Also, the installation (duplication) takes half an hour to complete! Is there any way to make it faster?
Any suggestion will be appreciated.
As I said before, lubuntu is faster when it boots non-persistent. So I inferred that if I keep only the home directory persistent, the rest of the root filesystem stays in RAM and it should be faster. To achieve this, I changed the setup a little to boot with a persistent /home and to install every application automatically after each boot. It turned out that the boot time did not change (booting + installing), but operation is much faster, which is great for me.
I didn't change grub.cfg at all. I changed the bash script (liveusb-installer) to label the second partition home-rw, so the rest of the directories just stay in RAM.
In the bash script /path to the first partition/boot/liveusb-installer, just change the line mkfs.ext2 ${destUSB}2 -L casper-rw to mkfs.ext2 ${destUSB}2 -L home-rw.
After changing liveusb-installer, you can use it whenever you want to clone this USB drive. If you already installed a drive with the recipe above, just boot into the second option of the GRUB menu (the non-persistent one), then format the second partition and label it home-rw. After that, reboot to the first option of the GRUB menu, go online, and install any programs that you want to have there permanently:
sudo apt-get update
sudo apt-get install blablabla
After installing, copy all the downloaded packages and lists into the ~/apt directory by running these commands:
mkdir ~/apt
mkdir ~/apt/lubuntu-archives
mkdir ~/apt/lubuntu-lists
cp /var/cache/apt/archives/*.deb ~/apt/lubuntu-archives
cp /var/lib/apt/lists/*ubuntu* ~/apt/lubuntu-lists
Now create the following files in the ~/apt directory.
/home/lubuntu/apt/start-up
#!/bin/bash
apt_dir=/home/lubuntu/apt
# This script is meant to be run by /home/lubuntu/apt/autostart
for file in $(ls $apt_dir/lubuntu-archives)
do
ln -s $apt_dir/lubuntu-archives/$file /var/cache/apt/archives/$file
done
for file in $(ls $apt_dir/lubuntu-lists)
do
ln -s $apt_dir/lubuntu-lists/$file /var/lib/apt/lists/$file
done
apt-get install -y binutils gcc g++ make m4 perl tar \
vim codeblocks default-jre synapse
exit 0
Also, replace the packages in the apt-get install line above with the blablabla packages from your own install command.
/home/lubuntu/apt/autostart
#!/bin/bash
# This script is meant to be run from /home/lubuntu/.config/lxsession/Lubuntu/autostart
# or from the autostart list of "Menu->Preferences->Default applications for LXSession"
xterm -e /usr/bin/sudo /bin/bash /home/lubuntu/apt/start-up
synapse
Then edit the file /home/lubuntu/.config/lxsession/Lubuntu/autostart and add the path of the above file to it, like this:
/home/lubuntu/apt/autostart
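If you prefer doing this from a terminal, appending the entry would look like this (same path as above):
echo "/home/lubuntu/apt/autostart" >> /home/lubuntu/.config/lxsession/Lubuntu/autostart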
Now, after each reboot, a nice terminal opens and all the packages are installed as I wish! The advantage of this method over a persistent root directory is much faster operation, for instance when opening windows or running programs. But the cloning and booting times are still long. I would be glad if anybody could help me make this more professional and faster.
I have built a custom board with an i.MX6 processor.
Unfortunately, I did not break out the bootloader config pins; they are still unconnected BGA balls. I do have access to UART1-3, JTAG, and the SD card interface, and also to the BOOT0 and BOOT1 pins.
Now I would like to start U-Boot, so I have ported/added my own board to the configs. I can build U-Boot successfully (not tested on the board yet).
Then I thought I could download U-Boot into the internal RAM of the i.MX6. Unfortunately, the i.MX6 has only 68 KB of RAM available, and U-Boot is about 160 KB.
After some googling I saw that there is the possibility of compiling a secondary program loader (SPL), which starts first, loads U-Boot from the SD card into the DDR3 RAM, and then starts the regular U-Boot from the external DDR3 RAM.
I also found this readme:
https://github.com/ARM-software/u-boot/blob/master/doc/README.SPL
This is my current defconfig file:
CONFIG_ARM=y
CONFIG_ARCH_MX6=y
CONFIG_TARGET_EVAL1A=y
CONFIG_MXC_UART=y
CONFIG_DM_MMC=y
CONFIG_SYS_TEXT_BASE=0x87800000
CONFIG_SPL_TEXT_BASE=0x0907000
CONFIG_SPL=y
CONFIG_SPL_BUILD=y
CONFIG_SPL_SERIAL_SUPPORT=y
CONFIG_SPL_FS_FAT=y
CONFIG_SPL=y
I'm a little bit confused about SYS_TEXT_BASE and SPL_TEXT_BASE.
I think SPL_TEXT_BASE is where my SPL will reside, so 0x907000 is the start of the internal RAM. SYS_TEXT_BASE should be the start of the external DDR3 RAM, right?
Anyway, with the above config and the following commands:
make mrproper
make myBoard_config
make
I only get the regular u-boot.bin, which is about 160 KB in size.
What am I doing wrong? How can I build the SPL into a separate binary?
Thanks.
Edit: I solved it this way:
If you build Linux yourself for your own board, or even for another board, you often end up with three different files:
U-Boot Bootloader
Linux Kernel (zImage, uImage)
rootfs (debian, ubuntu etc.)
Usually you get some advice on how to format your SD card properly and copy your files onto it. But normally it is more convenient to have one single image file which you can flash onto your card using a third-party tool like balenaEtcher.
During the development of my i.MX6 dev board I created a shell script which does exactly this: combine all three parts into one image.
I provide this script here for free, without any further comments or instructions. It was tested with an i.MX6 controller using mainline U-Boot and mainline Linux.
The script is provided without any warranty. Use it at your own risk.
#!/bin/bash
# Copyright: C. Hediger, 2019
# databyte.ch
# provided without warranty. use it at your own risk!
echo "-------------------------------------------"
echo "-------- SD-Card image generator ----------"
echo "-------------------------------------------"
echo ""
#echo "Please enter the Size of your Image"
read -p 'Size for *.img [MB] default 512MByte: ' imagesize
if [ -z "$imagesize" ]
then
imagesize="512"
fi
#echo "Please enter the destination of the image"
read -p 'Path to Image default /home/<user>/Desktop/sdcard/sdimage.img: ' imagepath
if [ -z "$imagepath" ]
then
imagepath="/home/"$USER"/Desktop/sdcard/sdimage.img"
fi
#echo "Please enter the path of the u-boot.imx"
read -p 'Path to u-boot default /home/<user>/Desktop/sdcard/u-boot.imx: ' ubootpath
if [ -z "$ubootpath" ]
then
ubootpath="/home/"$USER"/Desktop/sdcard/u-boot.imx"
fi
#echo "Please enter the path of the rootfs.tar.gz"
read -p 'Path to rootfs default /home/<user>/Desktop/sdcard/rootfs.tar.gz: ' rootfspath
if [ -z "$rootfspath" ]
then
rootfspath="/home/"$USER"/Desktop/sdcard/rootfs.tar.gz"
fi
read -p 'Strip output directory? Default 1: ' stripcount
if [ -z "$stripcount" ]
then
stripcount="1"
fi
#echo "Please enter the path of the kernel"
read -p 'Path to kernel default /home/<user>/Desktop/sdcard/zImage: ' kernelpath
if [ -z "$kernelpath" ]
then
kernelpath="/home/"$USER"/Desktop/sdcard/zImage"
fi
#echo "Please enter the path of the device tree blob"
read -p 'Path to *.dtb default /home/<user>/Desktop/sdcard/eval1a.dtb: ' dtbpath
if [ -z "$dtbpath" ]
then
dtbpath="/home/"$USER"/Desktop/sdcard/eval1a.dtb"
fi
ddimagesize=$((imagesize * 2))k     # dd block count: sizeMB * 1024 * 1024 / 512 = sizeMB * 2 * 1024
partitionsize=+$((imagesize - 20))M # partition size: image size minus ~20 MB of headroom
#echo $imagesize
#echo $imagepath
#echo $partitionsize
dd status=progress if=/dev/zero of=$imagepath bs=512 count=$ddimagesize
(
echo o # Create a new empty DOS partition table
echo p # Print the (still empty) partition table
#echo u # change units to cylinders
echo x # Enter expert mode to set the drive geometry
echo h # Number of heads ...
echo 255
echo s # Sectors per track ...
echo 63
echo c # Number of cylinders ...
echo   # ... keep the default
echo r # Return to the main menu
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number 1
echo 4096 # First sector (leaves room for the U-Boot image written below)
echo $partitionsize # Last sector
echo p # Print the resulting partition table
echo w # Write the table to the image and exit
) | fdisk $imagepath
dd bs=512 seek=2 conv=notrunc if=$ubootpath of=$imagepath
loodevice="$(sudo losetup --partscan --show --find $imagepath)"
loopartition="$loodevice"p1
mountfolder=/mnt/sdcardtmpfolder
echo "Device: "$loodevice
echo "Partition: "$loopartition
sudo mkfs.ext2 $loopartition
sudo mkdir -p $mountfolder
sudo mount $loopartition $mountfolder
sudo cp $rootfspath "$mountfolder"/rootfs.tar.gz
sudo tar xzf "$mountfolder"/rootfs.tar.gz -C $mountfolder --strip-components=$stripcount
sudo cp $kernelpath "$mountfolder"/boot/zImage
sudo cp $dtbpath "$mountfolder"/boot/imx6ull-dtb-eval1a.dtb
sudo rm "$mountfolder"/rootfs.tar.gz
echo " ----- Please extract rootfs -----"
read
sudo umount /mnt/sdcardtmpfolder
sudo fsck.ext4 $loopartition
sudo losetup -d $loodevice
sudo rm -R /mnt/sdcardtmpfolder
sudo gparted $imagepath
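Once the script has finished, the resulting image can be written to an SD card either with balenaEtcher, as mentioned above, or with plain dd, roughly like this (/dev/sdX is a placeholder for your card reader; triple-check the device name, because this overwrites it):
sudo dd if=/home/$USER/Desktop/sdcard/sdimage.img of=/dev/sdX bs=4M status=progress conv=fsync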
I'm writing an auto-update facility for my cross-platform application. The updater portion downloads the installer file and executes a shell command to install it. On macOS our "installer" takes the form of a .dmg file. I need to be able to silently mount the disk image, copy/overwrite the contained .app bundle(s) to the destination directory, then unmount the disk image. I am assuming the disk image contains a bundle that can be copied directly to /Applications or elsewhere; there is no sensible way to handle an arbitrary .dmg file, as has been asked before, because its contents cannot be known, so some assumptions must be made.
VOLUME=$(hdiutil attach -nobrowse '[DMG FILE]' |
tail -n1 | cut -f3-; exit ${PIPESTATUS[0]}) &&
(rsync -a "$VOLUME"/*.app /Applications/; SYNCED=$?
(hdiutil detach -force -quiet "$VOLUME" || exit $?) && exit "$SYNCED")
I'll break this down:
hdiutil attach -nobrowse '[DMG FILE]' Mount the disk image, but don't show it on the desktop
| tail -n1 | cut -f3- Discard the first two tokens of hdiutil's last line output, leaving the remainder, which is the mounted volume
VOLUME=$(...; exit ${PIPESTATUS[0]}) Set VOLUME to the output of the above, and set the exit code to that of hdiutil
&& If the disk image was mounted successfully...
rsync -a "$VOLUME"/*.app /Applications/ ...use rsync to copy the .app files to the /Applications directory, while preserving permissions/symlinks/ownership etc.
; SYNCED=$? Store result of rsync operation
(hdiutil detach -force -quiet "$VOLUME" Force-unmount the disk image
|| exit $?) && exit "$SYNCED" Exit with the hdiutil exit code, or with the rsync exit code if hdiutil succeeded
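For readability, here is the same logic written out as a small script. It is a simplified sketch: instead of propagating PIPESTATUS it just checks that a volume path was parsed, and the .dmg path is taken as a placeholder argument:
#!/bin/bash
DMG="$1"                                  # path to the downloaded .dmg
VOLUME=$(hdiutil attach -nobrowse "$DMG" | tail -n1 | cut -f3-)
if [ -z "$VOLUME" ]; then
    exit 1                                # mount failed or output could not be parsed
fi
rsync -a "$VOLUME"/*.app /Applications/   # copy the bundle(s), preserving permissions, symlinks, ownership
SYNCED=$?
hdiutil detach -force -quiet "$VOLUME"    # unmount even if rsync failed
exit "$SYNCED"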
In case I accidentally modify or delete important documents, my Linux PC makes daily backups with a script that is executed by cron and contains the following line:
rsync --checksum --recursive ${source} ${dest}/$i --link-dest=${dest}/$((i-1))
(${source} is the path of the documents folder, and ${dest}/n is the path of the n-th backup.)
Using the --link-dest option has the great advantage that if you back up a 3 GB folder, change one small file, and back up again, both backups combined need only about 3 GB of disk space, instead of the 6 GB they would need if I ran rsync without the --link-dest option.
I'm struggling to write a similar script for Windows: I could just use the PowerShell cp -r command (or the cmd xcopy command), but those commands have no option similar to rsync's --link-dest. Using the Windows Subsystem for Linux for the rsync command works, but scripts in the cron.daily folder inside WSL do not get executed daily.
TLDR: What is the Windows equivalent of rsync -r pathA pathB --link-dest pathC?
PS: In case anyone wants the Linux version of the script for their own backups, here it is:
#!/bin/bash
source=/home/username/documents
dest=/myBackup
if [ "$1" == "--install" ] ; then
echo "installing..."
cp $0 /etc/cron.daily/myBackupScript
mkdir $dest
echo "installed"
exit 0
fi
for i in {0..9999}; do
if [ ! -e ${dest}/$i ]; then
echo "Copying to " ${dest}/$i
if [ -d ${dest}/$((i-1)) ]; then
rsync --checksum --recursive ${source} ${dest}/$i --link-dest=${dest}/$((i-1))
else
rsync --checksum --recursive ${source} ${dest}/$i
fi
DATE=`date +%Y-%m-%d__%H:%M:%S`
touch ${dest}/$i/$DATE
exit 0
fi
done
echo "unable to do backup"
exit 4
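To set it up, run the script once with --install (it copies itself into /etc/cron.daily and creates the backup directory); the filename below is just whatever you saved it as:
sudo bash ./myBackupScript.sh --install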
The current rsync version (3.2.2) from the MSYS2 collection for Windows (install with pacman -S rsync) supports the --link-dest hardlink re-use option correctly on NTFS. It also supports NTFS Unicode filenames now.
Absolute paths have to be given in MSYS / Cygwin convention - e.g. /C/path/to/source/.
Note: so far (2021-02) the MSYS2 rsync cannot create / replicate symbolic links in the destination using any of the symlink options; it creates content copies instead. Yet it can detect and exclude symlinks in the source.
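As a hypothetical example of that path convention (drive letters and folder names are placeholders), the equivalent of the Linux rsync line would look like this under MSYS2:
rsync --checksum --recursive /C/Users/username/Documents/ /D/backup/5 --link-dest=/D/backup/4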
I am writing a shell script (meant to work with Ubuntu only) that assumes a disk has previously been opened (using the command below) so that it can perform operations on it (resize2fs, lvcreate, ...). However, this might not always be the case, and when the disk is closed, the user of the script has to run this line before running the script and enter his/her passphrase:
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
Ideally, the script should start with this command, simplifying the steps for the user. However, if the disk was indeed already opened, the script will fail, because an encrypted disk cannot be opened twice.
How can I check whether the disk has already been opened? Is checking that /dev/mapper/sdaX_crypt exists a valid and sufficient solution? If not, or if that is not possible, is there a way to run the command only when necessary?
I'd also suggest lsblk, but since I came here looking for relevant info, I thought I'd post the following command here as well:
#: cryptsetup status <device> | grep -qi active
Cheers
You can use the lsblk command.
If the disk is already unlocked, it will display two lines: the device and the mapped device, where the mapped device should be of type crypt.
# lsblk -l -n /dev/sdaX
sdaX 253:11 0 2G 0 part
sdaX_crypt (dm-6) 253:11 0 2G 0 crypt
If the disk is not yet unlocked, it will only show the device.
# lsblk -l -n /dev/sdaX
sdaX 253:11 0 2G 0 part
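Building on that, a sketch of the conditional asked for in the question would be (device and mapping names as in the question):
# open the volume only if lsblk does not already show a crypt mapping on top of it
if ! lsblk -l -n -o TYPE /dev/sdaX | grep -q crypt; then
    sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi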
Since I could not find a better solution, I went ahead and chose the "check if the device exists" approach.
The encrypted disk contains a specific volume group (called my-vg in this example), so my working solution is:
if [ ! -b /dev/my-vg ]; then
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi
I check that /dev/my-vg exists instead of /dev/mapper/sdaX_crypt because every other command in my script uses the former as an argument, so I kept it for consistency, but I reckon that the solution below looks more encapsulated:
if [ ! -b /dev/mapper/sdaX_crypt ]; then
sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi
Although the solution I described above works for me, is there a good reason to switch to the latter one, or does it not matter?
cryptsetup status volumeName
echo $? # Exit status should be 0 (success).
If you want to avoid displaying cryptsetup output, you can redirect it to /dev/null.
cryptsetup status volumeName > /dev/null
echo $? # Exit status should be 0 (success).
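So the unlock can be made conditional on that exit status; a minimal sketch, with the volume and device names taken from the question:
if ! sudo cryptsetup status sdaX_crypt > /dev/null; then
    sudo cryptsetup luksOpen /dev/sdaX sdaX_crypt
fi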
This is a snippet from a script I wrote last night to take daily snapshots.
DEVICE=/dev/sdx
HEADER=/root/luks/arch/sdx-header.img
KEY_FILE=/root/luks/arch/sdx-key.bin
VOLUME=luksHome
MOUNTPOINT=/home
SUBVOLUME=@home
# Ensure encrypted device is active.
cryptsetup status "${VOLUME}" > /dev/null
IS_ACTIVE=$?
while [[ $IS_ACTIVE -ne 0 ]]; do
printf "Volume '%s' does not seem to be active. Activate? [y/N]\n" $VOLUME
read -N 1 -r -s
if [[ $REPLY =~ ^[Yy]$ ]]; then
cryptsetup open --header="${HEADER}" --key-file="${KEY_FILE}" "${DEVICE}" "${VOLUME}"
IS_ACTIVE=$?
else
exit 0
fi
done
I have written a small bash (4) script to back up shares from my Windows PCs. Currently I back up only one share, and the backup is only visible to root. Please give me some hints for improving this piece of code:
#!/bin/bash
# Script to back up directories on several windows machines
# Permissions, owners, etc. are preserved (-av option)
# Only differences are submitted
# Execute this script from where you want
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
# Specify the current date for the log-file name
current_date=$(date +%Y-%m-%d_%H%M%S)
# Specify the path to a list of file patterns not included in backup
script_path=$(dirname $(readlink -f $0))
rsync_excludes=$script_path/rsync_exclude.patterns
# Specify mount/rsync options
credential_file="/root/.smbcredentials"
# Specify windows shares
smb_shares=( //192.168.0.100/myFiles )
# Specify the last path component of the directory name to backup shares
# content into
smb_share_ids=( "blacksmith" )
# Specify with trailing '/' to transfer only the dir content
rsync_src="/mnt/smb_backup_mount_point/"
rsync_dst_root=(~/BACKUPS)
# Check if all arrays have the same size
if [ "${#smb_shares[#]}" -ne "${#smb_share_ids[#]}" ]; then
echo "Please specify one id for each samba share!"
exit 1
fi
# Run foor loop to sync all specified shares
for (( i = 0 ; i < ${#smb_shares[@]} ; i++ ))
do
# Check if mount point already exists
echo -n "Checking if mount point exists ... "
if [ -d $rsync_src ]; then
echo "Exists, exit!"
exit 1
else
echo "No, create it"
mkdir $rsync_src
fi
# Try to mount share and perform rsync in case of success
echo -n "Try to mount ${smb_shares[$i]} to $rsync_src ... "
mount -t cifs ${smb_shares[$i]} $rsync_src -o credentials=$credential_file,iocharset=utf8,uid=0,file_mode=0600,dir_mode=0600
if [ "$?" -eq "0" ]; then
echo "Success"
# Specify the log-file name
rsync_logfile="$rsync_dst_root/BUP_${smb_share_ids[$i]}_$current_date.log"
# Build rsync destination root
rsync_dst=( $rsync_dst_root"/"${smb_share_ids[$i]} )
# Check if rsync destination root already exists
echo -n "Check if rsync destination root already exists ... "
if [ -d $rsync_dst ]; then
echo "Exists"
else
echo "Does not exist, create it"
mkdir -p $rsync_dst
fi
# Run rsync process
# -av > archive (preserve owner, permissions, etc.) and verbosity
# --stats > print a set of statistics showing the effectiveness of the rsync algorithm for your files
# --bwlimit=KBPS > transfer rate limit - 0 defines no limit
# --progress > show progress
# --delete > delete files in $DEST that have been deleted in $SOURCE
# --delete-after > delete files at the end of the file transfer on the receiving machine
# --delete-excluded > delete excluded files in $DEST
# --modify-window > files differ first after this modification time
# --log-file > save log file
# --exclude-from > exclude everything from within an exclude file
rsync -av --stats --bwlimit=0 --progress --delete --delete-after --delete-excluded --modify-window=2 --log-file=$rsync_logfile --exclude-from=$rsync_excludes $rsync_src $rsync_dst
fi
# Unmount samba share
echo -n "Unmount $rsync_src ... "
umount $rsync_src
[ "$?" -eq "0" ] && echo "Success"
# Delete mount point
echo -n "Delete $rsync_src ... "
rmdir $rsync_src
[ "$?" -eq "0" ] && echo "Success"
done
Now I need some help concerning following topics:
Checking conditions like share existence and mount point existence (to make the script foolproof)
The mount command - is it correct, do I give the correct permissions?
Is there a better place for the backup files than the home directory of a user, given that only root can see those files?
Do you think it would be helpful to integrate the backup of other file systems too?
The backup is rather slow (around 13 MB/s) although I have a gigabit network - possibly this is because of the SSH encryption of rsync? The Linux system the share is mounted on has a PCI SATA controller, an old mainboard, 1 GB RAM, and an Athlon XP 2400+. Could there be other reasons for the slowness?
If you have more topics that can be addressed here - be welcome to post them. I'm interested =)
Cheers
-Blackjacx
1) There is a lot of error checking you can do to make this script more robust: check for the existence and execute status of external executables (rsync, id, date, dirname, etc.), check the exit status of rsync (with if [ 0 -ne $? ]; then), ping the remote machine to make sure it is on the network, check that the user you are doing the backup for has a home directory on the local machine, check that the destination directory has enough space for the rsync (see the sketch after this list), etc.
2) The mount command looks OK; did you want to try mounting it read-only?
3) Depends on the number of users and the size of the backup. For small backups the home directory is likely fine, but if the backups are large, especially if you might run out of disk space, a dedicated backup location would be nice.
4) Depends on what the purpose of the backup is. In my experience there are five kinds of things that people back up: user data, system data, user configurations, system configurations, and entire disks or filesystems. Is there a separate system task that backs up the system data and configurations?
5) What other tasks are running while the backup takes place? If the backup is running automated at say 1 AM, there may be other system intensive tasks scheduled for the same time. What is your typical throughput rate for other kinds of data?
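For the disk-space check mentioned in 1), a rough sketch would be (the 1 GiB threshold is just an example, and it assumes $rsync_dst_root already exists):
# df --output=avail prints the free space of the destination filesystem in 1K blocks
avail_kb=$(df --output=avail "$rsync_dst_root" | tail -n1)
if [ "$avail_kb" -lt 1048576 ]; then
    echo "Not enough free space in $rsync_dst_root" 1>&2
    exit 1
fi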
You are performing an array check on non-array variables.
Use a variable to specify the remote machine.
# Remote machine
declare remote_machine="192.168.0.100"
# Specify windows shares array
declare -a smb_shares=([0]="//$remote_machine/myFiles"
[1]="//$remote_machine/otherFiles" )
# Specify the last path component of the directory name to backup shares
# content into
declare -a smb_share_ids=( [0]="blacksmith" [1]="tanner" )
that way you can
ping -c 1 $remote_machine
if [ 0 -ne $? ] ; then
echo "ping on $remote_machine Failed, exiting..."
exit 1
fi
If rsync_src is only used for this backup, you might want to try
mnt_dir="/mnt"
rsync_src=`mktemp -d -p $mnt_dir`
and I would create this directory before the for loop; otherwise you are creating and destroying it for every share you back up.