Checking for availability of a network disk on macOS

I'm writing a script to automate backing up certain directories on my Mac to an AirDisk (a USB disk on my AirPort Extreme).
I've been reading up on rsync. It seems that if the AirDisk isn't mounted, rsync creates the directory under "/Volumes/the name of the disk".
This could fill up my hard drive, and the backup isn't supposed to end up on my local drive.
Therefore I want to check that the drive is mounted before I start the rsync command.
Can anyone help?

I would check whether a file on the mounted volume exists. As long as you mount the disk at the same location each time, this should work.
if [ -f /Volumes/AirDisk/foo.txt ]; then
    echo "AirDisk mounted. Starting backup"
    # Put backup script here
else
    echo "File does not exist"
    exit 1
fi
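An alternative that avoids the sentinel file is to ask the mount table directly. A minimal sketch, assuming the volume is named AirDisk (substitute your own volume name):

```shell
#!/bin/sh
# Return success if the given path appears as a mount point in the
# output of mount(8), e.g. "/dev/disk3s1 on /Volumes/AirDisk (hfs, ...)".
is_mounted() {
    mount | grep -q " on $1 "
}

if is_mounted /Volumes/AirDisk; then
    echo "AirDisk mounted. Starting backup"
    # rsync command goes here
else
    echo "AirDisk not mounted; skipping backup"
fi
```

This also works if the volume is mounted but the sentinel file was deleted.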

Related

Shell script for connecting network drives crashes my MacBook's internet capabilities

My work requires me to connect to several network drives over two different protocols, SMB and SSHFS. I got tired of typing in the commands to connect to them individually and being prompted for my password every time, so I wrote this script:
#!/bin/sh
# SSHFS shares
local_paths=("/Users/$USER/mnt/share_1" "/Users/$USER/mnt/share_2" "/Users/$USER/mnt/share_3")
remote_paths=("$USER@server.university.edu:/home/$USER" "$USER@server.university.edu:/some/path" "$USER@server.university.edu:/another/path")
echo "Enter password:"
read -s password
for i in "${!local_paths[@]}"; do
    diskutil unmount "${local_paths[$i]}"
    echo "Mounting ${remote_paths[$i]} to ${local_paths[$i]}"
    mkdir -p "${local_paths[$i]}"
    sshfs -o password_stdin "${remote_paths[$i]}" "${local_paths[$i]}" -o volname="$(basename "${local_paths[$i]}")" <<< "$password"
    echo
done
# SMB shares
local_paths=("/Users/$USER/mnt/share_4" "/Users/$USER/mnt/share_5")
remote_paths=("//$USER@different.server.university.edu/home/$USER" "//$USER@different.server.university.edu/some/path")
for i in "${!local_paths[@]}"; do
    diskutil unmount "${local_paths[$i]}"
    echo "Mounting ${remote_paths[$i]} to ${local_paths[$i]}"
    mkdir -p "${local_paths[$i]}"
    mount_smbfs "${remote_paths[$i]}" "${local_paths[$i]}"
done
It just loops through every path and disconnects/reconnects. It mostly works: after running it, I gain access to four of the five drives. For some reason, the last SSHFS share in the array will mount, but I get a "permission denied" error when I try to open the folder where it is mounted. If I re-order the array, it is always the last path that errors out like this. I have no such issue with the SMB shares.
Once this error happens, my computer is bugged out. Trying to forcibly unmount the share just freezes my terminal. I lose all ability to access websites or do anything else that uses the network. I can't even restart the computer without holding down the power button for a hard reset.
Technical Specs:
Intel MacBook Pro
MacOS Big Sur
zsh, but I've tried this script in bash and sh with the same result.
Notes:
I tested this on a colleague's laptop and got the same results.
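Not a confirmed fix for the hang, but one mitigation worth trying is sshfs's reconnect and keep-alive options, which reduce the chance of a dead SSH connection wedging the mount. A sketch of the extra options, using the question's first share as the example:

```
sshfs -o password_stdin,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
      "$USER@server.university.edu:/home/$USER" "/Users/$USER/mnt/share_1" \
      -o volname=share_1 <<< "$password"
```

`reconnect` is an sshfs option; `ServerAliveInterval`/`ServerAliveCountMax` are standard ssh options passed through, making ssh drop (rather than hang on) a dead connection.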

Is there a way I can access, use, and manipulate files on one server from a shell script running on another server?

I tried accessing files from the remote server "10.101.28.83" and manipulating them to create folders on the host server where the script runs. But the output of the echo "$(basename "$file")" command is *, which implies that the files are not being read from the remote server.
#!/bin/bash
#for file in /root/final_migrated_data/*; do
for file in root@10.101.28.83:/root/final_migrated_data/* ; do
    echo "$(basename "$file")"
    IN="$(basename "$file")"
    IFS='_'
    read -a addr <<< "$(basename "$file")"
    # addr[0] is case_type, addr[1] is case_no, addr[2] is case_year
    dir_path="/newdir1"
    backup_date="${addr[0]}_${addr[1]}_${addr[2]}"
    backup_dir="${dir_path}/${backup_date}"
    mkdir -p "${backup_dir}"
    cp /root/final_migrated_data/"${backup_date}"_* "${backup_dir}"
done
I expect the output of echo "$(basename "$file")" to be the list of files present at /root/final_migrated_data/ on the remote server, but the actual output is *.
You can use sshfs. As the name suggests, sshfs allows you to locally mount (for both reading and writing) a remote filesystem to which you have SSH access. If you already know SSH, its usage is very straightforward:
# mount the remote directory on your local machine:
sshfs user@server.com:/my/directory /local/mountpoint
# manipulate the filesystem just like any other (cd, ls, mv, rm…)
# unmount:
umount /local/mountpoint
If you need to mount the same remote filesystem often, you can even add it to your /etc/fstab; refer to the documentation for how to do that.
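For reference, such an fstab entry might look like the following (a sketch, assuming key-based authentication; the user, host, and paths mirror the example above):

```
user@server.com:/my/directory  /local/mountpoint  fuse.sshfs  noauto,user,_netdev,IdentityFile=/home/user/.ssh/id_rsa  0  0
```

The `fuse.sshfs` filesystem type tells mount(8) to hand the entry to sshfs; `noauto` avoids blocking boot on an unreachable server.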
Note however that an SSH filesystem is slow, because each filesystem operation means fetching or sending data over the network, and sshfs is not particularly optimized for that (it does not cache file contents, for instance). Other solutions exist that may be more complex to set up but offer better performance.
See for yourself whether speed is a problem. In your case, if you are simply copying files from one place on your server to another place on the same server, it seems rather absurd to route them through your home computer and back. It may be better to simply run your script on the server directly.
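Following that last suggestion, the filename-splitting logic can be factored into a portable function and the whole loop executed on the server itself. A sketch, with the field names taken from the question's comment (case_type, case_no, case_year):

```shell
#!/bin/sh
# Derive the backup directory for a migrated-data filename of the form
# <case_type>_<case_no>_<case_year>_<rest>.
parse_backup_dir() {
    name=$(basename "$1")
    IFS=_ read -r case_type case_no case_year rest <<EOF
$name
EOF
    printf '/newdir1/%s_%s_%s\n' "$case_type" "$case_no" "$case_year"
}

parse_backup_dir civil_123_2019_part1.tar.gz   # -> /newdir1/civil_123_2019

# To run the whole loop server-side instead (assumption: root SSH access,
# host taken from the question):
# ssh root@10.101.28.83 'for f in /root/final_migrated_data/*; do ...; done'
```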

Need Help Writing a Bash Script to Compare Mounted Network Shares on Mac OS X

I am trying to write a bash script that bases its outcome on which network share is mounted.
I have 3 possible network mounts and need 3 different outcomes depending on which one is mounted.
I have no experience writing bash, but I am able to fill in the rest once I have the if statement set up; I just can't find anything to help me compare which mounts are mounted.
This is what I am looking for:
if [ network mount 1 is mounted ]; then
    copy file 1
elif [ network mount 2 is mounted ]; then
    copy file 2
else    # network mount 3 is mounted
    copy file 3
fi
I'm stuck! Please help or guide me.
UPDATE:
I think I have been comparing the wrong piece of information to achieve my goal.
As I'm hoping to run this script as a login hook, I think it's better to compare the user's SMB home directory rather than the mount, since the share won't be mounted when the script runs.
I can get the network home directory with this line of code:
dscl . read /Users/$USER SMBHome
I now want to compare this against 3 possible SMB file servers:
FS02/StudentsFolders$
FS03/StaffFolders$
FS03/TeachersFolders$
using a wildcard comparison, since the SMB path will also include the user's name. This is what I have so far:
#!/bin/bash
UsersHome=$(dscl . read "/Users/$USER" SMBHome)
if [[ $UsersHome == *"FS02/StudentsFolders$"* ]]; then
    # First argument
    echo "Make changes to plist"
elif [[ $UsersHome == *"FS03/StaffFolders$"* ]]; then
    # Second argument
    echo "Make different changes to plist"
else
    # Third argument
    echo "Make changes to the plist"
fi
I know there will be syntax errors; please feel free to point them out.
Is this the correct way to go about my problem?
Mounted volumes are usually found in the /Volumes directory. You might want something like
if [ -d /Volumes/DiskOne ]; then      # Is DiskOne mounted?
    ...
elif [ -d /Volumes/DiskTwo ]; then    # Is DiskTwo mounted?
    ...
else
    ...
fi
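For the updated SMBHome approach, the wildcard comparison can also be written as a case statement. A sketch, with the server names taken from the update; the dscl lookup (macOS-only) is kept separate from the portable string test:

```shell
#!/bin/sh
# Map an SMBHome value to one of the three known file servers.
classify_home() {
    case "$1" in
        *"FS02/StudentsFolders$"*) echo student ;;
        *"FS03/StaffFolders$"*)    echo staff ;;
        *"FS03/TeachersFolders$"*) echo teacher ;;
        *)                         echo unknown ;;
    esac
}

# On the Mac itself you would feed it the real value:
# classify_home "$(dscl . read "/Users/$USER" SMBHome)"
```

Note the patterns are case-sensitive; if the directory service may return lowercase server names, normalize the input first (e.g. with tr).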

Instance of Google Compute Engine freezes trying to upload files to Google Cloud Storage

I have written this shell script, which downloads archives from a URL list, decompresses them, and finally moves them into a Cloud Storage bucket.
#!/bin/bash
for iurl in $(cat ./html-rdfa.list); do
    filename=$(basename "$iurl")
    file="${filename%.*}"
    if gsutil ls "gs://rdfa/$file"; then
        echo "yes"
    else
        wget "$iurl"
        gunzip "$filename"
        gsutil cp -n "$file" gs://rdfa
        rm "$file"
        sleep 2
    fi
done
html-rdfa.list contains the URL list. The instance was created using the Debian 7 image provided by Google.
The script runs correctly for the first 5 or 6 files, but then the instance freezes and I have to delete it. The RAM and disk of the instance are not full when it freezes.
I think the problem is caused by the gsutil cp command, but it is strange that the CPU load is practically 0 and the RAM is free, yet it is impossible to use the instance without restarting it.
Are you writing the temporary files to the default 10GB root disk? If so, you may be running into the Persistent Disk throughput caps. To see if this is the case, create a new Persistent Disk, then mount it as a data disk and use that disk for the temporary files. Consider starting with ~200GB disk and see if that is enough throughput for your workload. Also, see the docs on Persistent Disk performance.
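The steps suggested above might look like this with the gcloud CLI (the disk name, instance name, size, and zone are placeholders; adjust to your project):

```
# create and attach a data disk for the temporary files
gcloud compute disks create scratch-disk --size=200GB --zone=us-central1-a
gcloud compute instances attach-disk my-instance --disk=scratch-disk --zone=us-central1-a

# then, on the instance: format, mount, and run the script from there
sudo mkfs.ext4 -F /dev/disk/by-id/google-scratch-disk
sudo mkdir -p /mnt/scratch
sudo mount /dev/disk/by-id/google-scratch-disk /mnt/scratch
```

On GCE, attached disks appear under /dev/disk/by-id/google-<disk-name>, which avoids guessing device letters.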

How to check for usb device with if statement in bash

I'm attempting to create an automated bash script that fills up a file with urandom on the unit's flash storage. I can run all of the commands manually to make this happen, but I'm having difficulty figuring out how to check for the USB device in a script. I know that it will be either sda1 or sdb1, but I'm not sure whether the code below is sufficient...? Thanks! Here is the code:
if /dev/sda1
then
mount -t vfat /dev/sda1 /media/usbkey
else
mount -t vfat /dev/sdb1 /media/usbkey
fi
The way I script mountable drives is to first put a file on the drive, e.g. "Iamthemountabledrive.txt", and then check for the existence of that file. If it isn't there, I mount the drive. I use this technique to make sure an audio server is mounted for a five-station radio network, checking every minute in case there is a network interruption.
testfile="/dev/usbdrive/Iamthedrive.txt"
if [ -e "$testfile" ]; then
    echo "drive is mounted."
fi
You can mount by label or UUID and hence reduce the complexity of your script. For example, if your flash storage has the label MYLABEL (you can set and display VFAT labels using mtools' mlabel):
$ sudo mount LABEL=MYLABEL /media/usbkey
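Putting the answers together, the asker's broken `if /dev/sda1` test can be expressed with the `-b` operator (true if the path exists and is a block device). A sketch; the device names and mount point are taken from the question:

```shell
#!/bin/sh
# Print the first candidate path that exists as a block device (-b),
# or fail if none do.
pick_block_device() {
    for dev in "$@"; do
        if [ -b "$dev" ]; then
            echo "$dev"
            return 0
        fi
    done
    return 1
}

if dev=$(pick_block_device /dev/sda1 /dev/sdb1); then
    echo "found $dev"
    # mount -t vfat "$dev" /media/usbkey   # as in the question
else
    echo "no usb stick detected"
fi
```

Unlike the original `if /dev/sda1` (which tries to execute the device file), this tests for the device's presence without side effects.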
