Mac OS X: mount an SMB share to /Volumes from a script

I can mount an SMB share to /Volumes using the following:
osascript -e "mount volume \"smb://user:pass@hal/share\""
But this only works if I have already logged into the Mac, otherwise I get a "FAILED TO establish the default connection to the WindowServer" error.
I can use the mount command to mount to a folder in my home directory, which works whether or not I have logged in:
mkdir ~/test
mount -t smbfs //user:pass#hal/share ~/test
But I can't do this with /Volumes as it is owned by root. How does the osascript call have permission to write to a folder owned by root, and how can I do the same thing without using AppleScript?
Thank you

Answering my own question:
Originally the AppleScript I was using was:
osascript -e 'tell application "Finder" to mount volume "smb://user:pass@hal/share"'
This gives a different error and the mount fails when the user has not logged in:
29:78: execution error: An error of type -610 has occurred. (-610)
I found the simpler version, which doesn't go through the Finder, while composing this question:
osascript -e 'mount volume "smb://user:pass@hal/share"'
As I said, this also gives an error when the user has not logged in:
_RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.
But it does actually mount the network share in /Volumes, so I can use this and just ignore the error.
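For reference, here is a minimal sketch of wrapping that in a plain shell script, reusing the example host, share and credentials from the question (in practice the password would come from somewhere safer than the script itself):
#!/bin/sh
# Sketch: mount the share via osascript and discard the WindowServer warning,
# since the mount itself still succeeds even when no user is logged in.
SHARE_URL="smb://user:pass@hal/share"
osascript -e "mount volume \"$SHARE_URL\"" 2>/dev/null
# Confirm the share actually appeared under /Volumes before relying on it.
mount | grep -q "/Volumes/share" || echo "mount failed" >&2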

Related

Shell script for connecting network drives crashes my MacBook's internet capabilities

My work requires me to connect to several network drives over two different protocols, SMB and SSHFS. I got tired of typing in the commands to connect to them individually and being prompted for my password every time, so I wrote this script:
#!/bin/sh
# SSHFS shares
local_paths=("/Users/$USER/mnt/share_1" "/Users/$USER/mnt/share_2" "/Users/$USER/mnt/share_3")
remote_paths=("$USER#server.university.edu:/home/$USER" "$USER#server.university.edu:/some/path" "$USER#server.university.edu:/another/path")
echo "Enter password:"
read -s password
for i in "${!local_paths[#]}"; do
diskutil unmount ${local_paths[$i]}
echo "Mounting ${remote_paths[$i]} to ${local_paths[$i]}"
mkdir -p ${local_paths[$i]}
sshfs -o password_stdin ${remote_paths[$i]} ${local_paths[$i]} -o volname=$(basename ${local_paths[$i]}) <<< $password
echo
done
# SMB shares
local_paths=("/Users/$USER/mnt/share_4" "/Users/$USER/mnt/share_4")
remote_paths=("//$USER#different.server.university.edu:/home/$USER" "//$USER#different.server.university.edu:/some/path")
for i in "${!local_paths[#]}"; do
diskutil unmount ${local_paths[$i]}
echo "Mounting ${remote_paths[$i]} to ${local_paths[$i]}"
mkdir -p ${local_paths[$i]}
mount_smbfs ${remote_paths[$i]} ${local_paths[$i]}
done
It just loops through every path and disconnects/reconnects. It mostly works. After running it, I gain access to four of the five drives. For some reason, the last SSHFS in the array will mount, but I get a "permission denied" error message when I try to open the folder where it is mounted. If I re-order the array, it is always the last path that will error out like this. I have no such issue with the SMB shares.
Once this error happens, my computer is bugged out. Trying to forcibly unmount the share will just freeze my terminal. I lose all ability to access websites or do anything else that uses a network connection. I can't even restart the computer without holding down the power button for a hard reset.
Technical Specs:
Intel MacBook Pro
MacOS Big Sur
zsh, but I've tried this script in bash and sh with the same result.
Notes:
I tested this on a colleague's laptop and got the same results.

Can I name a VeraCrypt volume on Mac OS?

When VeraCrypt 1.23 mounts a volume, it is named NO NAME.
Is there a way to give these volumes a name?
I am using the console to create my containers:
veracrypt -t -c $LOCATION --encryption=AES --hash=SHA-512 --filesystem=FAT --password=$PASSWORD --size=1G --volume-type=Normal --pim=$PIM --keyfiles=
I tried renaming the volume in /Volumes/NO\ NAME but that just removes the volume from the desktop.
I also tried specifying a mount point:
Enter mount directory [default]: /Users/Test
But the volume still mounts as NO NAME.
Using diskutil I can rename volumes, as below.
/usr/sbin/diskutil rename "NO NAME" "TEST2"
I am leaving the question open as this is a bit of a hack.
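As a sketch, the hack can be scripted by mounting the container in text mode and renaming the volume straight afterwards; the placeholders are the same ones used in the create command above, TEST2 is just an example label, and the mount is assumed to land under /Volumes as usual:
#!/bin/sh
# Mount the container (text mode, same placeholders as at creation time),
# then rename the FAT volume, which always comes up as "NO NAME".
veracrypt -t --password="$PASSWORD" --pim="$PIM" --keyfiles= --protect-hidden=no "$LOCATION"
/usr/sbin/diskutil rename "NO NAME" "TEST2"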

mosh + osx + /bin/false error

I have successfully installed mosh on both the server and the client. I am trying to connect from OS X using mosh, but it throws the following error:
/bin/false: No such file or directory
write: Broken pipe
/usr/local/bin/mosh: Did not find remote IP address (is SSH ProxyCommand disabled?).
I am not sure whether this has anything to do with mosh or whether it is a general error. Please help me set up mosh.
This error
/bin/false: No such file or directory
most likely means that the login shell of the account you are trying to log in to is set to /bin/false, i.e. the account is disabled. You need to log in as another user and change the shell to a valid executable:
$ chsh -s /bin/bash [username]
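For example, assuming the server is a typical Linux box, you can first check what shell the account currently has (username is a placeholder):
# On the server: show the account's login shell.
grep '^username:' /etc/passwd
# A trailing /bin/false in that line is what triggers the error above.
# Switch it to a real shell (run as root, or via sudo, if it is not your own account), then retry mosh:
chsh -s /bin/bash username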

Can't locate desired local directory while using SCP on Mac

I'm trying to copy a local directory to the root directory of my 1and1 server. I'm on a Mac and I've SSH'ed into the server just fine. I looked online and saw numerous examples, all along the same lines. I tried:
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:/
The result in my terminal was an error referencing Kuden/homepages/29/d401405832/htdocs. I'm not sure where that path came from; I thought the ~ would take me to the MacBook user directory.
Any help would be appreciated, I'm not sure if I'm just missing something simple.
Thanks in advance
To scp, issue the command on your Mac; don't SSH into 1and1.
The error message is telling you that ~/Desktop/Projects/resume is not on the 1and1 server, which you know - because you're working to put it there.
More ...
scp myfile myuser@myserver:~/mypath/myuploadedfile
You would read this as:
copy myfile to myserver, logging in as myuser, and place it under the mypath directory of the myuser account, with the name myuploadedfile
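Applied to the question, the command run from the Mac would look something like this (copying into the remote account's home directory is an assumed example, since the exact destination path wasn't given):
# Run on the Mac itself, not inside an SSH session on the 1and1 server.
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:~/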

rsyncing using cygwin's rsync from the Windows Command Prompt

I am pushing a local file to a folder in a remote location using cygwin's rsync from the Windows Command Prompt.
The below command
D:\My Folder>C:/cygwin/bin/rsync.exe -avh data.csv ec2-user@someserver.com::~"overhere/"
returns the error, "failed to connect to someserver.com : connection timed out"
When I try the following command to place the file in the remote location root folder,
D:\My Folder>C:/cygwin/bin/rsync.exe -avh data.csv ec2-user@someserver.com~
it says "sending incremental file list" but I am not able to find the file in the root folder in the remote location.
What am I doing wrong?
The timeout most likely occurs because there is no rsync daemon running on the server known as someserver.com.
Using :: after the remote host name makes rsync try to connect to an rsync daemon running on that machine. If you use : instead, rsync uses remote shell access to copy your data.
Your second call to rsync.exe appears to succeed because rsync.exe -avh data.csv ec2-user@someserver.com~ just creates a copy of data.csv in your current working directory named ec2-user@someserver.com~.
If you use shell access, you can provide the path directly after the :; if you use the rsync daemon, you have to provide the module name, as configured in /etc/rsyncd.conf on the server, after the ::. So in your case it is either ec2-user@someserver.com:~/overhere/ for shell access or ec2-user@someserver.com::MODULE for the daemon.
But as I suspect that you have no rsync daemon running on the remote machine, you would have to install and configure one first for the second form to work. The first form works through normal SSH access.
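For reference, a daemon module is declared on the server in /etc/rsyncd.conf roughly like this (module name and path are hypothetical); only with such a module in place does the ::MODULE form work:
# /etc/rsyncd.conf on someserver.com -- hypothetical example module
[overhere]
    path = /home/ec2-user/overhere
    read only = false
    uid = ec2-user
    gid = ec2-user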
So as a first attempt you can try: D:\My Folder>C:/cygwin/bin/rsync.exe -avh data.csv ec2-user@someserver.com:overhere/
This will create a folder named overhere in the ec2-user's home directory on someserver.com if it doesn't already exist and copy the local data.csv into that directory.
