In case I accidentally modify or delete important documents, my Linux PC makes daily backups with a script that is executed by cron and contains the following line:
rsync --checksum --recursive ${source} ${dest}/$i --link-dest=${dest}/$((i-1))
(${source} is the path of the documents folder, and ${dest}/n is the path of the n-th backup.)
Using the --link-dest option has the great advantage that if you back up a 3 GB folder, change one small file and back up again, both backups combined need 3 GB of disk space instead of the 6 GB they would need if I ran rsync without --link-dest.
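The sharing behind that saving is just hard links. A throwaway sketch (made-up temp paths, GNU coreutils; this imitates what rsync does per unchanged file, it is not the backup script itself):

```shell
# Sketch of what --link-dest does for an unchanged file: the new backup
# directory gets a hard link, not a second copy of the data.
tmp=$(mktemp -d)
mkdir "$tmp/backup0" "$tmp/backup1"
echo "important data" > "$tmp/backup0/doc.txt"

# For each unchanged file, rsync --link-dest effectively does this:
ln "$tmp/backup0/doc.txt" "$tmp/backup1/doc.txt"

# Both directory entries point at the same inode, so the data is stored once.
links=$(stat -c '%h' "$tmp/backup1/doc.txt")
echo "$links"   # 2: two directory entries, one copy on disk
rm -rf "$tmp"
```

Only changed files consume new space; everything else is a link into the previous snapshot.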
I'm struggling to write a similar script for Windows: I could just use the cp -r PowerShell command (or the xcopy cmd command), but neither has an option similar to rsync's --link-dest. Using the Windows Subsystem for Linux for the rsync command works, but scripts in the cron.daily folder inside WSL do not get executed daily.
TL;DR: What is the Windows equivalent of rsync -r pathA pathB --link-dest=pathC?
PS: In case anyone wants the Linux version of the script for their own backups, here it is:
#!/bin/bash
source=/home/username/documents
dest=/myBackup

if [ "$1" == "--install" ]; then
    echo "installing..."
    cp "$0" /etc/cron.daily/myBackupScript
    mkdir "$dest"
    echo "installed"
    exit 0
fi

for i in {0..9999}; do
    if [ ! -e "${dest}/$i" ]; then
        echo "Copying to ${dest}/$i"
        if [ -d "${dest}/$((i-1))" ]; then
            rsync --checksum --recursive "${source}" "${dest}/$i" --link-dest="${dest}/$((i-1))"
        else
            rsync --checksum --recursive "${source}" "${dest}/$i"
        fi
        DATE=$(date +%Y-%m-%d__%H:%M:%S)
        touch "${dest}/$i/$DATE"
        exit 0
    fi
done

echo "unable to do backup"
exit 4
The current rsync version (3.2.2) from the MSYS2 collection for Windows (install: pacman -S rsync) supports the --link-dest hard-link re-use option correctly on NTFS. It also supports NTFS Unicode filenames now.
Absolute paths have to be given in MSYS/Cygwin convention, e.g. /C/path/to/source/.
Note: so far (2021-02), the MSYS2 rsync cannot create or replicate symbolic links in the destination using any of the symlink options; it creates content copies instead. It can, however, detect and exclude symlinks in the source.
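For illustration, an invocation under MSYS2 might look like the following (the paths and backup numbers are made-up examples, not verified on any particular machine; scheduling would be done with the Windows Task Scheduler, since cron is unavailable):

```shell
# Hypothetical MSYS2 rsync call on Windows; note the /C/... path
# convention mentioned above.
rsync --checksum --recursive /C/Users/me/Documents/ /D/backup/2 --link-dest=/D/backup/1
```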
I am running numerous simulations on a remote server (via ssh). The outcomes of these simulations are stored as .tar archives in an archive directory on this remote server.
What I would like to do, is write a bash script which connects to the remote server via ssh and extracts the required output files from each .tar archive into separate folders on my local hard drive.
These folders should have the same name as the .tar file the files come from (for example, if the output of simulation 1 is stored in the archive S1.tar on the remote server, I want all .dat and .def files within this archive to be extracted to a directory S1 on my local drive).
For the extraction itself, I was trying:
for f in *.tar; do
    (
        mkdir "../${f%.tar}"
        tar -x -f "$f" -C "../${f%.tar}" "*.dat" "*.def"
    ) &
done
wait
Every .tar file is around 1GB and there is a lot of them. So downloading everything takes too much time, which is why I only want to extract the necessary files (see the extensions in the code above).
Now the code works perfectly when I have the .tar files on my local drive. However, what I can't figure out is how I can do it without first having to download all the .tar archives from the server.
When I first connect to the remote server via ssh username@host, the script simply stops at that point and the terminal just connects to the server.
By the way, I am doing this in VS Code and running the script through the terminal on my MacBook.
I hope I have described it clear enough. Thanks for the help!
Stream the results of tar back with filenames via SSH
To get the data you wish to retrieve from .tar files, you'll need to pass the results of tar to a string of commands with the --to-command option. In the example below, we'll run three commands.
# Send the files name back to your shell
echo $TAR_FILENAME
# Send the contents of the file back
cat /dev/stdin
# Send EOF (Ctrl+d) back (note: since we're already in a $'' we don't use the $ again)
echo '\004'
Once the information is captured in your shell, we can start to process the data. This is a three-step process.
Get the file's name
note that this code doesn't handle directories at all (it simply strips them away, i.e. dir/1.dat -> 1.dat)
you could create directories for each file by splitting the path on the forward slashes / and iterating over each directory name, but that seems out of scope here
Check for the EOF (end-of-file)
Add content to file
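If you did want to keep the directory structure rather than strip it, one sketch (with invented example names, not part of the original script) is to recreate the parent chain with mkdir -p before touching each file:

```shell
# Recreate a member's directory chain instead of stripping it.
# "$out" and "$member" are throwaway example names.
out=$(mktemp -d)
member="dir/sub/1.dat"

mkdir -p "$out/$(dirname "$member")"   # creates dir/ and dir/sub/ in one call
touch "$out/$member"

# Confirm the nested file now exists before cleaning up the example
created=$([ -f "$out/$member" ] && echo yes)
rm -rf "$out"
```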
# Get the files via ssh and tar
files=$(ssh -n <user@server> $'tar -xf <tar-file> --wildcards \'*\' --to-command=$\'echo $TAR_FILENAME; cat /dev/stdin; echo \'\004\'\'')
# Keeps track of what state we're in (filename or content)
state="filename"
filename=""
# Each line is one of these:
# - file's name
# - file's data
# - EOF
while read line; do
    if [[ $state == "filename" ]]; then
        filename=${line/*\//}
        touch "$filename"
        echo "Copying: $filename"
        state="content"
    elif [[ $state == "content" ]]; then
        # look for EOF (ctrl+d)
        if [[ $line == $'\004' ]]; then
            filename=""
            state="filename"
        else
            # append data to file
            echo "$line" >> <output-folder>/$filename
        fi
    fi
# Double quotes around $files here are very important
done < <(echo -e "$files")
Alternative: tar + scp
If the above example seems overly complex for what it's doing, it is. An alternative that touches the disk more and requires two separate ssh connections is to extract the files you need from your .tar file to a folder on the server, then scp that folder back to your workstation.
ssh -n <username>@<server> 'mkdir output/; tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"'
scp -r <username>@<server>:output/ ./
The breakdown
First, we'll make a place to keep our outputted files. You can skip this if you already know the folder they'll be in.
mkdir output/
Then, we'll extract the matching files into the folder we created (if you don't want them in a separate folder, remove the -C output/ option).
tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"
Lastly, now that we're running commands on our machine again, we can run scp to reconnect to the remote machine and pull the files back.
scp -r <username>@<server>:output/ ./
Description
I want to copy all files ending in .jpg from my local machine to the remote machine with scp.
For this I have a small "script". It looks like this:
#!/bin/bash
xfce4-terminal -e "scp -r -v /path/to/local/folder/*.jpg <user>@<IP>:/var/path/to/remote/folder/" --hold
Problem
When I open a terminal and enter scp -r -v /path/to/local/folder/*.jpg <user>@<IP>:/var/path/to/remote/directory/ it works.
So SSH is working correctly.
When I start the script, it doesn't.
The script works when I copy the whole local folder. It then looks like this (simply with the *.jpg removed):
#!/bin/bash
xfce4-terminal -e "scp -r -v /path/to/local/folder/ <user>@<IP>:/var/path/to/remote/folder/" --hold
But then I have the local folder inside the remote folder, where I only want the files.
I don't know if it is important, but currently I use a computer with Linux Mint 19.3, the Xfce terminal and zsh.
Question
So how do I correctly run a script that copies files from a local folder to a remote folder?
It's the shell that expands the wildcard, but when you use -e in xfce4-terminal, the command is run without a shell parsing it. You can run a shell to run the command, though:
xfce4-terminal -e "bash -c 'scp -r -v /path/to/local/folder/*.jpg user@ip:/var/path/to/remote'" --hold
Are you sure you need the -r? Directories are usually not named .jpg.
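The point about who expands the glob can be checked locally (throwaway files in a temp directory; nothing here touches scp or ssh):

```shell
# A glob is expanded by the shell, not by the program that receives it.
work=$(mktemp -d)
cd "$work"
touch a.jpg b.jpg notes.txt

# Handed to a shell, *.jpg expands to the matching filenames:
matched=$(bash -c 'echo *.jpg')
echo "$matched"   # a.jpg b.jpg
cd / && rm -rf "$work"
```

A program that receives the literal string `*.jpg` with no shell in between (as with xfce4-terminal -e) never sees the individual filenames.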
First, I want to share my experience of how to make a USB pen drive from an Ubuntu live ISO that is multiboot and can duplicate itself via a bash script. I'll guide you through making something like that, and then, since I'm not an expert, ask how I can make it faster (while booting, operating or cloning).
First of all, you should partition your USB flash drive into two partitions with a tool like GParted: one FAT32 partition and one ext2 partition with a fixed size of 5500 MB (if you change its size, you have to change this number in the bash code too). The size of the first partition is the whole size of your USB flash drive minus the size of the second partition.
Second, you must download an Ubuntu ISO image (I downloaded Lubuntu 13.10 because it's faster, but I think Ubuntu should work too), copy it to the first partition (the FAT32 one) and rename it to ubuntu.iso.
Third, run this command to install the GRUB bootloader (you can find this command in the bash code too):
sudo grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot /dev/sdc1
/mnt/usb2 is the directory where you mounted the first partition, and /dev/sdc1 is its device. If you don't know this information, just use fdisk -l or Menu->Preferences->Disks to find out. Then copy the following files to the mentioned directories and reboot to the USB flash drive (on my motherboard, by pressing F12 and selecting my flash device from the "HDD Hard" list).
/path to the first partition/boot/grub/grub.cfg
set timeout=10
set default=0
menuentry "Run Ubuntu Live ISO Persistent" {
    loopback loop /ubuntu.iso
    linux (loop)/casper/vmlinuz persistent boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
    initrd (loop)/casper/initrd.lz
}
menuentry "Run Ubuntu Live ISO (for cloning to a new USB drive)" {
    loopback loop /ubuntu.iso
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/ubuntu.iso noeject noprompt splash --
    initrd (loop)/casper/initrd.lz
}
the bash code:
/path to the first partition/boot/liveusb-installer
#!/bin/bash
destUSB=$1

# insert mountpoint, receive device name
get_block_from_mount() {
    dir=$(readlink -f "$1")
    MOUNTNAME=$(echo "$dir" | sed 's/\/$//')
    if [ "$MOUNTNAME" = "" ]; then
        echo ""
        return 1
    fi
    BLOCK_DEVICE=$(mount | grep "$MOUNTNAME " | cut -f 1 -d " ")
    if [ "$BLOCK_DEVICE" = "" ]; then
        echo ""
        return 2
    fi
    echo "$BLOCK_DEVICE"
    return 0
}

sdrive=$(echo "$destUSB" | sed 's/\/dev\///')
if ! [ -f /sys/block/$sdrive/capability ] || ! [ $(($(< /sys/block/$sdrive/capability )&1)) -ne 0 ]
then
    echo "Error: The argument must be the destination USB device in the /dev directory!"
    echo "If you don't know this information, just try 'sudo fdisk -l' or use Menu->Preferences->Disks"
    exit 1
fi

srcDirectory=/isodevice
srcDev=$(get_block_from_mount $srcDirectory)
srcUSB="${srcDev%?}"
if [ "$srcUSB" == "$destUSB" ]; then
    echo "Error: The device argument is wrong! It's the source USB drive."
    exit 1
fi

diskinfo=$(sudo parted -s "$destUSB" print)
echo "$diskinfo"

# Find size of disk
v_disk=$(echo "$diskinfo" | awk '/^Disk/ {print $3}' | sed 's/[Mm][Bb]//')
second_disk=5500
if [ "$v_disk" -lt "6500" ]; then
    echo "Error: the disk is too small!!"
    exit 1
elif [ "$v_disk" -gt "65000" ]; then
    echo "Error: the disk is too big!!"
    exit 1
fi

echo "Partitioning ."
# Remove each partition
for v_partition in $(echo "$diskinfo" | awk '/^ / {print $1}')
do
    umount -l ${destUSB}${v_partition}
    parted -s $destUSB rm ${v_partition}
done

# Create partitions
let first_disk=$v_disk-$second_disk
parted -s $destUSB mkpart primary fat32 1 ${first_disk}
parted -s $destUSB mkpart primary ext2 ${first_disk} ${v_disk}

echo "Formatting .."
# Format the partitions
mkfs.vfat ${destUSB}1
mkfs.ext2 ${destUSB}2 -L home-rw

echo "Install grub into ${destUSB}1 ..."
mkdir /mnt/usb2
mount ${destUSB}1 /mnt/usb2
grub-install --force --no-floppy --boot-directory=/mnt/usb2/boot $destUSB
cp $srcDirectory/boot/grub/grub.cfg /mnt/usb2/boot/grub
cp $srcDirectory/boot/liveusb-installer /mnt/usb2/boot

echo "Copy ubuntu.iso from ${srcUSB}1 to ${destUSB}1......"
cp $srcDirectory/ubuntu.iso /mnt/usb2
umount -l ${destUSB}1
rm -r /mnt/usb2

echo "Copy everything from ${srcUSB}2 to ${destUSB}2 ............"
dd if=${srcUSB}2 of=${destUSB}2

echo "It's done!"
exit 0
So after that, if you want to clone this flash drive, just reboot to the second option of the GRUB boot loader, plug in another USB flash drive and run liveusb-installer /dev/sdc. It will make another USB drive, with every app installed on the first one, on the /dev/sdc drive. I made this so all of my students would have the same flash drive to practice programming with C, Python or Sage everywhere. The speed of the non-persistent mode (the second option in the GRUB menu) is fine, but the first option, the persistent one, takes 3-4 minutes to boot and is a little bit slow afterwards! Also, the installation (duplication) takes half an hour to complete! Is there any improvement to make it faster in any way?
Any suggestion will be appreciated.
As I said before, if Lubuntu boots non-persistent, it is faster. So I inferred that if I keep just the home directory persistent, the rest of the root filesystem stays in RAM, and it should be faster. To achieve this, I changed the setup a little to boot with /home persistent and to install every application automatically after each boot. It turned out that this way the boot time didn't change (booting + installing), but operation is much faster, which is great for me.
I didn't change grub.cfg at all. I changed the bash code (liveusb-installer) to label the second partition home-rw, so the rest of the folders just stay in RAM.
In the bash code /path to the first partition/boot/liveusb-installer, just change the line mkfs.ext2 ${destUSB}2 -L casper-rw to mkfs.ext2 ${destUSB}2 -L home-rw.
After changing liveusb-installer, you can use it whenever you want to clone this USB drive. If you installed it before (using the recipe above), just boot to the second option of the GRUB menu (the non-persistent one), then format the second partition and label it home-rw. After that, reboot to the first option of the GRUB menu, get online and install any programs that you want to be there permanently.
sudo apt-get update
sudo apt-get install blablabla
After installing, copy all the packages and lists to the ~/apt directory by running these commands:
mkdir ~/apt
mkdir ~/apt/lubuntu-archives
mkdir ~/apt/lubuntu-lists
cp /var/cache/apt/archives/*.deb ~/apt/lubuntu-archives
cp /var/lib/apt/lists/*ubuntu* ~/apt/lubuntu-lists
Now copy the following files into the ~/apt directory.
/home/lubuntu/apt/start-up
#!/bin/bash
# This script is meant to be started by /home/lubuntu/apt/autostart
apt_dir=/home/lubuntu/apt
for file in $(ls $apt_dir/lubuntu-archives)
do
    ln -s $apt_dir/lubuntu-archives/$file /var/cache/apt/archives/$file
done
for file in $(ls $apt_dir/lubuntu-lists)
do
    ln -s $apt_dir/lubuntu-lists/$file /var/lib/apt/lists/$file
done
apt-get install -y binutils gcc g++ make m4 perl tar \
    vim codeblocks default-jre synapse
exit 0
Also, change the packages above to match the blablabla of your install command.
/home/lubuntu/apt/autostart
#!/bin/bash
# This script is meant to be started by /home/lubuntu/.config/lxsession/Lubuntu/autostart
# or the autostart of "Menu->Preferences->Default applications for LXSession"
xterm -e /usr/bin/sudo /bin/bash /home/lubuntu/apt/start-up
synapse
Then edit the file /home/lubuntu/.config/lxsession/Lubuntu/autostart and add the path of the above file to it, like this:
/home/lubuntu/apt/autostart
Now after each reboot a nice terminal opens and all the packages install themselves, just as I wanted! The advantage of this method over a persistent root directory is much faster operation, for instance when opening windows or running programs. But the cloning and boot times are still long. I would be glad if anybody could help me make it more professional and faster.
I am using make and tar to back up. When the makefile is executed, the tar command shows "file changed as we read it". In this case,
the tar package is OK when the warning comes up
but the warning stops the tar command for the backups that follow
the file showing the warning in fact doesn't change -- it is really strange that the warning comes up at all
the files showing the warning come up randomly; every time I run my makefile, the files showing the warning are different
--ignore-failed-read doesn't help. I am using tar 1.23 in MinGW.
I just changed my computer to 64-bit Windows 7. The script worked well on my old 32-bit Windows 7, but the tar version there was older than 1.23.
How can I stop tar's warning from stopping the backups that follow it?
Edit 2: this might be the reason
As I said above, the bash shell script worked well on my old computer. Compared with the old computer, the MSYS version is different, and so is the version of the tar command: on the old computer tar is 1.13.19, while it is 1.23 on the new one. I copied the old tar command to the new computer without copying its dependency msys-1.0.dll, renamed it tar_old, updated the tar command in the shell script and ran the script. Then everything was OK. So it seems that the problem is the tar command itself. I am sure that no file changed while tar was running. Is it a bug in the newer tar? I don't know.
Edit 1: more details
The backup is invoked by a bash shell script. It scans the target directory, builds a makefile and then invokes make to use the tar command for the backup. Below is a typical makefile built by the shell script.
#--------------------------------------------
# backup VC
#--------------------------------------------
# the program for packing
PACK_TOOL=tar
# the option for packing tool
PACK_OPTION=cjvf
# M$: C drive
WIN_C_DIR=c:
# M$: D drive
WIN_D_DIR=d:
# M$: where the software is
WIN_PRG_DIR=wuyu/tools
# WIN_PRG_DIR=
# where to save the backup files
BAKDIR=/home/Wu.Y/MS_bak_MSYS
VC_FRAMEWORK=/home/Wu.Y/MS_bak_MSYS/tools/VC/VC_framework.tar.bz2
VC_2010=/home/Wu.Y/MS_bak_MSYS/tools/VC/VC_2010.tar.bz2
.PHONY: all
all: $(VC_FRAMEWORK) $(VC_2010)
$(VC_FRAMEWORK): $(WIN_C_DIR)/$(WIN_PRG_DIR)/VC/Framework/*
	@$(PACK_TOOL) $(PACK_OPTION) "$@" --ignore-failed-read /c/$(WIN_PRG_DIR)/VC/Framework
$(VC_2010): $(WIN_C_DIR)/$(WIN_PRG_DIR)/VC/VS2010/*
	@$(PACK_TOOL) $(PACK_OPTION) "$@" --ignore-failed-read /c/$(WIN_PRG_DIR)/VC/VS2010
As you can see, the tar package is stored in ~/MS_bak_MSYS/tools/VC/VC_2010.tar.bz2. I run the script in ~/qqaa, and ~/MS_bak_MSYS is excluded from the tar command. So the tar file I am creating is not inside a directory I am putting into the tar file. This is why I find it strange that the warning comes up.
I also encountered the tar message "file changed as we read it". For me the message occurred when I was making a tar file of a Linux file system in a bitbake build environment. The error was sporadic.
In my case it was not due to creating the tar file in the same directory; I assume some file actually was overwritten or changed during tar file creation.
The message is a warning and tar still creates the tar file. We can suppress the warning by setting the option
--warning=no-file-changed
(http://www.gnu.org/software/tar/manual/html_section/warnings.html)
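A minimal sketch of the flag in use (GNU tar; dir/ and backup.tar.gz are throwaway example names created in a temp directory, not anything from the makefile above):

```shell
# Suppress only the "file changed as we read it" warning class.
work=$(mktemp -d)
cd "$work"
mkdir dir && echo hi > dir/a.txt

tar --warning=no-file-changed -czf backup.tar.gz dir
rc=$?    # 0 here, since nothing changed during the read
cd / && rm -rf "$work"
```

Note that suppressing the message does not change tar's exit status when a file really does change mid-read, which is why the exit-code handling below is still needed.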
Still, the exit code returned by tar is 1 in the warning case:
http://www.gnu.org/software/tar/manual/html_section/Synopsis.html
So if we are calling tar from some function in a script, we can handle the exit code like this:
set +e
tar -czf sample.tar.gz dir1 dir2
exitcode=$?
if [ "$exitcode" != "1" ] && [ "$exitcode" != "0" ]; then
    exit $exitcode
fi
set -e
Although it's very late, I recently had the same issue.
The issue arises because the directory . changes while xyz.tar.gz is being created by the command. There are two solutions:
Solution 1:
tar does not mind if the archive is created in a directory inside .. There may be reasons why you can't create the archive outside the workspace; work around that by creating a temporary directory to hold the archive:
mkdir artefacts
tar -zcvf artefacts/archive.tar.gz --exclude=./artefacts .
echo $?
0
Solution 2:
This is the one I like: create the archive file before running tar:
touch archive.tar.gz
tar --exclude=archive.tar.gz -zcvf archive.tar.gz .
echo $?
0
If you want help debugging a problem like this, you need to provide the make rule, or at least the tar command you invoked. How can we see what's wrong with the command if there's no command to see?
However, 99% of the time an error like this means that you're creating the tar file inside a directory that you're trying to put into the tar file. So, when tar tries to read the directory it finds the tar file as a member of the directory, starts to read it and write it out to the tar file, and so between the time it starts to read the tar file and when it finishes reading the tar file, the tar file has changed.
So for example something like:
tar cf ./foo.tar .
There's no way to "stop" this, because it's not wrong. Just put your tar file somewhere else when you create it, or find another way (using --exclude or whatever) to omit the tar file.
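Both fixes can be sketched like this (GNU tar; the names and the temp directory are made up for illustration):

```shell
# Two ways to avoid tarring the archive into itself.
work=$(mktemp -d)
cd "$work"
echo hi > file.txt

# Option A: write the archive somewhere outside the tree being read
tar cf /tmp/outside.$$.tar .
rc_a=$?

# Option B: keep it in ".", but tell tar to skip it by name
tar cf foo.tar --exclude=foo.tar .
rc_b=$?

cd / && rm -rf "$work" /tmp/outside.$$.tar
```

Either way, tar never reads its own growing output file, so the warning has nothing to trip on.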
Here is a one-liner for ignoring the tar exit status if it is 1. There is no need for set +e as in sandeep's script. If the tar exit status is 0 or 1, this one-liner returns with exit status 0; otherwise it returns with exit status 1. This differs from sandeep's script, where the original exit status value is preserved whenever it is different from 1.
tar -czf sample.tar.gz dir1 dir2 || [[ $? -eq 1 ]]
To enhance Fabian's one-liner: let us say that we want to ignore only exit status 1 but preserve the exit status if it is anything else:
tar -czf sample.tar.gz dir1 dir2 || ( export ret=$?; [[ $ret -eq 1 ]] || exit "$ret" )
This does everything sandeep's script does, on one line.
Simply using an outer directory for the output solved the problem for me.
sudo tar czf ./../31OCT18.tar.gz ./
Exit codes for tar are restricted, so you don't get too much information.
You can assume that ec=1 is safe to ignore, but it might trip you up -- e.g. the gzip example in other posts (an exit code coming from an external program).
The reasons for the file changed as we read it error/warning can vary:
A log file inside the directory.
Writing to a tar file in the same directory you are trying to back up.
etc.
Possible workarounds can involve:
exclude known files (log files, tar-files, etc)
ensure log files are written to other directories
This can be quite involved, so you might still want to just run the tar command and safely ignore some errors/warnings.
To do this you will have to:
Save the tar output.
Save the exit code
Check the output against known warnings and errors, not unlike tar's own ignore options.
Conditionally pass another exit code to the next program in the pipe.
In OP's case this would have to be wrapped in a script and run as PACK_TOOL.
# List of errors and warnings from "tar" which we will safely ignore.
# Adapt to your findings and needs
IGNORE_ERROR="^tar:.*(Removing leading|socket ignored|file changed as we read it)"
# Save stderr from "tar"
RET=$(tar zcf $BACKUP --exclude Cache --exclude output.log --exclude "*cron*sysout*" $DIR 2>&1)
EC=$? # Save "tar's" exit code
echo "$RET"
if [ $EC -ne 0 ]
then
    # Check the RET output, remove (grep -v) any errors / warnings you wish to ignore
    REAL_ERRORS=$(echo "$RET" | grep "^tar: " | grep -Ev "${IGNORE_ERROR:?}")
    # If there is any output left you actually got an error to check
    if [ -n "$REAL_ERRORS" ]
    then
        echo "ERROR during backup of ${DIR:?} to ${BACKUP:?}"
    else
        echo "OK backup of ${DIR:?} (warnings ignored)"
        EC=0
    fi
else
    echo "OK backup of ${DIR:?}"
fi
It worked for me after adding a simple sleep timeout of 20 seconds.
This can happen if your source directory is still being written to. The sleep lets the writes finish, and then tar should work fine. This also gave me the right exit status.
sleep 20
tar -czf ${DB}.${DATE}.tgz ./${DB}.${DATE}
I am not sure whether it suits you, but I noticed that tar does not fail on changed/deleted files in pipe mode. See what I mean:
Test script:
#!/usr/bin/env bash
set -ex
tar cpf - ./files | aws s3 cp - s3://my-bucket/files.tar
echo $?
Deleting random files manually while it runs...
Output:
+ aws s3 cp - s3://my-bucket/files.tar
+ tar cpf - ./files
tar: ./files/default_images: File removed before we read it
tar: ./files: file changed as we read it
+ echo 0
0
The answer can be very simple: don't save your tar file in the same directory while tarring it.
Just do: tar -cvzf resources/docker/php/php.tar.gz .
It will tar the current directory and save the archive to another directory.
That's easy peasy, lemon squeezy, fellas.
I'm trying to store all my profile configuration files (~/.xxx) in git. I'm pretty horrible at bash scripting, but I imagine this will be pretty straightforward for you scripting gurus.
Basically, I'd like a script that creates symbolic links in my home directory to files in my repo. The twist is, I'd like it to warn and prompt for overwrite if the symlink would overwrite an actual file. It should also prompt if a symlink is going to be overwritten but its target path is different.
I don't mind manually editing the script for each link I want to create. I'm more concerned with being able to quickly deploy new config scripts by running this script stored in my repo.
Any ideas?
The ln command is already conservative about erasing, so maybe the KISS approach is good enough for you:
ln -s git-stuff/home/.[!.]* .
If a file or link already exists, you'll get an error message and this link will be skipped.
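That conservatism is easy to verify (temp-dir sketch; the repo path is invented and doesn't need to exist, since a symlink can point anywhere):

```shell
# Without -f, ln -s refuses to overwrite an existing file.
work=$(mktemp -d)
cd "$work"
echo real > .profile

ln -s git-stuff/home/profile .profile 2>/dev/null && rc=0 || rc=$?
# rc is non-zero ("File exists") and the original file is untouched:
kept=$(cat .profile)
cd / && rm -rf "$work"
```

So the glob-based one-liner above silently skips anything you already have, which is exactly the conservative default you want before opting into prompts.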
If you want the files to have a different name in your repository, pass the -n option to ln so that it doesn't accidentally create a symlink in an existing subdirectory of that name:
ln -sn git-stuff/home/profile .profile
...
If you also want to have links in subdirectories of your home directory, cp -as reproduces the directory structure but creates symbolic links for regular files. With the -i option, it prompts if a target already exists.
cp -i -as git-stuff/home/.[!.]* .
(My answer assumes GNU ln and GNU cp, such as you'd find on Linux (and Cygwin) but usually not on other unices.)
The following has race conditions, but it is probably as safe as you can get without filesystem transactions:
# create a symlink at $dest pointing to $source
# not well tested
set -e # abort on errors
if [[ ( -h $dest && $(readlink -n "$dest") != $source ) || -f $dest || -d $dest ]]
then
    read -p "Overwrite $dest? " answer
else
    answer=y
fi
[[ $answer == y ]] && ln -s -n -f -v -- "$source" "$dest"