Recursive rm -rf of a symlink?

I think I know the answer to this one, but I'll ask anyway.
I created a symlink to a dir on a different disk. A script (which I have no control over) runs "rm -rf *" in the dir that contains this symlink. It deletes the symlink fine, but leaves the target dir on the other disk. I expected this, but I want to make sure there's no way to create the symlink so that it behaves like a hard link, in the sense that "rm -rf" would recursively delete the dir on the other disk. The -T option looked kind of promising, but didn't pan out. Again, I have no control over the rm command's execution, but I do create the target dir on the other disk, and I create the symlink to it.
Thanks in advance!

What you are essentially asking is: can a hard link be created across file systems? The answer is that it cannot. This tutorial confirms it:
An important thing to note about hard links is that they only work on the current file system. You can not create a hard link to a file on a different file system. To do that you need to use symbolic links, Section 1.4.3.
As you seem to already understand, removing a symlink has no effect on the thing it links to. This is true for hard links as well: removing one name for a file leaves the file itself intact, as long as at least one other link to it remains.
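A quick demonstration (hedged: /mnt/other stands in for a mount point on the second disk):
$ ln /mnt/other/data/file hardlink    # fails: hard links cannot cross file systems (EXDEV)
$ ln -s /mnt/other/data dir-link      # a symlink can point anywhere
$ rm -rf dir-link                     # deletes only the link itself
$ ls /mnt/other/data                  # the target dir and its contents survive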

Related

How to change SYMLINK to SYMLINKD in batch script

We're sharing SYMLINKD files on our git project. It almost works, except git modifies our SYMLINKD files to SYMLINK files when pulled on another machine.
To be clear, on the original machine, symlink is created using the command:
mklink /D Annotations ..\..\submodules\Annotations\Assets
On the original machine, the dir cmd displays:
25/04/2018 09:52 <SYMLINKD> Annotations [..\..\submodules\Annotations\Assets]
After cloning, on the receiving machine, we get
27/04/2018 10:52 <SYMLINK> Annotations [..\..\submodules\Annotations\Assets]
As you might guess, a file-type symlink pointing at a directory [..\..\submodules\Annotations\Assets] does not work correctly.
To fix this problem we either need to:
1. Prevent git from modifying our symlink types, or
2. Fix our symlinks with a batch script triggered by a githook.
We're going with 2, since we do not want to require all users to use a modified version of git.
My limited knowledge of batch scripting is impeding me. So far, I have looked into simply modifying the attrib of the file, using the info here:
How to get attributes of a file using batch file and https://superuser.com/questions/653951/how-to-remove-read-only-attribute-recursively-on-windows-7.
Can anyone suggest what attrib commands I need to modify the symlink?
Alternatively, I realise I can delete and recreate the symlink, but how do I get the target directory for the existing symlink short of using the dir command and parsing the path from the output?
I think it's https://github.com/git-for-windows/git/issues/1646.
To be more clear: your question appears to be a manifestation of the XY problem. The Git instance used to clone/fetch the project appears to mishandle symbolic links to directories, creating symbolic links pointing to files instead. So it appears to be a bug in Git for Windows, and instead of digging into it you've invented a workaround and are asking how to make it work.
So, I'd rather try to help the GfW maintainers and whoever reported #1646 to fix the problem. If you need a stop-gap solution, I'd say a proper way would be to go another route and script several calls to git ls-tree to figure out what the directory symlinks are (they'd have a special set of permission bits; you may start here).
You would then traverse all the tree objects of the HEAD commit, recursively, figure out which symlinks point at directories, and fix up the matching entries in the work tree by deleting them and recreating them with mklink /D or whatever creates the correct sort of symlink.
Unfortunately, I'm afraid trying to script this with the limited scripting facilities of cmd.exe would be an exercise in futility. I'd take a more "real" programming language (PowerShell, for example, and, since you're probably a Windows shop, even .NET would be OK).
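Here is a rough sketch of that idea as a Git Bash script run from the repository root (hedged: the fixlinks.bat name is made up, path quoting is simplified, and mklink needs an elevated cmd.exe or Developer Mode). It finds every symlink entry (mode 120000) in HEAD, keeps only those whose targets resolve to directories, and emits a batch file that recreates them with mklink /D:
git ls-tree -r HEAD | while read -r mode type hash path; do
  [ "$mode" = 120000 ] || continue                  # 120000 marks a symlink entry
  target=$(git cat-file blob "$hash")               # the stored link target
  [ -d "$(dirname "$path")/$target" ] || continue   # keep only links to directories
  win_path=$(printf '%s' "$path" | tr '/' '\\')
  win_target=$(printf '%s' "$target" | tr '/' '\\')
  # remove whatever git checked out, then recreate as a directory symlink
  printf 'rmdir "%s" 2>nul & del "%s" 2>nul\r\nmklink /D "%s" "%s"\r\n' \
    "$win_path" "$win_path" "$win_path" "$win_target"
done > fixlinks.bat
Running fixlinks.bat from an elevated prompt at the repo root should then restore the <SYMLINKD> entries.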

Executing a command from the current directory without dot slash like "./command"

I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├── bin
│   └── command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├── c (symlink to bin/command)
├── bin
│   └── command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
I must instead do:
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./, and without using a system-wide alias? It would be very convenient for distributed executables and utility scripts to have one-letter, folder-specific shortcuts on a per-project basis.
It's not a matter of bash not allowing execution from the current directory, but rather, you haven't added the current directory to your list of directories to execute from.
export PATH=".:$PATH"
$ c
$
This can be a security risk, however: if the directory contains files which you don't trust or don't know the origin of, a file in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run "chmod 755 bar".
You assume chmod really is the system chmod, but if there is a file named chmod in the current directory and your colleague put it there, then chmod is really a program he wrote, and you are running it as root. Perhaps his "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
Beware that the accepted answer introduces a serious vulnerability!
You might add the current directory to your PATH, but not at the beginning of it. That would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot is always at the end of the PATH when you add new directories to it.
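For example, a quick check in an interactive shell:
$ PATH="$PATH":.
$ c            # now resolves, because . is searched last
$ type c       # confirm where bash found it
c is ./c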
You could add . to your PATH. (See kamituel's answer for details)
Also there is ~/.local/bin for user specific binaries on many distros.
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.

Moving files between users from shell in Mountain Lion

I am working on a group of Bash shell scripts, and one of the scripts checks whether an update is needed. If so, it needs to copy files from my account to other users' accounts. In Snow Leopard I can just do something like:
account=$(whoami)
cp "/Users/Sleepykrooks/Library/Services/Program" "/Users/$account/Library/Services/Program"
But with Mountain Lion, even though the full path still looks like this, the same command fails with an error about not finding the folder or file it's looking for. However, it does work if you use something like:
cp "/Library/Services/Program" "/Library/Services/Program"
This is where I am unsure how to use my path to copy my updated files to another user's path.
Thank you for the help.
When you copy a folder in Unix, you usually need the -R flag. (See the cp manpage).
cp -R "/Users/Sleepykrooks/Library/Services/Program" "/Users/$account/Library/Services/Program"
Or, using the Bash ~username shortcut for a user's home directory:
cp -R ~Sleepykrooks/Library/Services/Program ~/Library/Services/Program
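If the goal is to push the update to every account on the machine, a loop over /Users is one way to sketch it (hedged: this assumes admin rights and skips the source account and any account without a Services directory):
src="/Users/Sleepykrooks/Library/Services/Program"
for home in /Users/*; do
  [ "$home" = "/Users/Sleepykrooks" ] && continue   # don't copy onto the source
  [ -d "$home/Library/Services" ] || continue       # skip Shared, Guest, etc.
  sudo cp -R "$src" "$home/Library/Services/"
done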
Normally, you should not be touching the /Library directory or the user's ~/Library directory on the Mac. And, you should never touch anything under /System unless you can stare into a TV camera and say in an absolutely serious tone "I'm a professional. Don't try this at home."

Rsync create symbolic links only

I currently have rsync working well. It copies all my files from one directory to another. The only thing is that it physically copies the files.
I have a lot of large files and don't want duplicates of all of them. I just want to create symbolic links in the new directory so that I can serve the data on a webpage. The source directory has some scripts and files I don't want the public to see, so I'm moving the safe data to the web root (the destination).
What I would like is for any new file in the source directory to get a link created in the destination. That way I am not using up hard drive space the way I currently am. What I have works perfectly except for the symbolic-link aspect. Is there a way to have rsync track files and create symbolic links to them?
rsync -aP --exclude="file.sql" --exclude="*~" --exclude=".*" --exclude="*.sh" . ${destination}
It's not a symlink, but you might be able to work with --link-dest=DIR. It creates a hard link, which is a new name for the same file. This will behave similarly to a softlink as long as:
Both files are on the same filesystem
You don't plan to delete the original and not the copy (the symlink would break but a hard-link won't)
You don't have anything explicitly checking to see if it's a softlink
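For instance, building on the command from the question (a sketch: --link-dest wants an absolute path, and pointing it at the source directory itself makes rsync hard-link matching files instead of copying them):
rsync -aP --exclude="file.sql" --exclude="*~" --exclude=".*" --exclude="*.sh" \
      --link-dest="$PWD" . ${destination}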
You could use cp -aR -s (Linux or FreeBSD) or cp --archive --recursive --symbolic-link (Linux) to create symbolic links to the source files in the destination directory instead of copies. Note that -s is non-standard.
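For example (a sketch; GNU cp can only create relative symlinks within the current directory, so give it an absolute source path):
cp -aR -s "$PWD"/. ${destination}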
lndir might also be useful to you. According to its manual, it creates a shadow directory of symbolic links pointing to another directory tree.
I think master_delivery is probably the best tool for this. With the --link-dest option of rsync introduced above, files which are not identical will still be copied. If you don't mind copies and hard links being mixed, you can use rsync, but if you want to eliminate duplicates completely, use master_delivery.
Usage is:
gem install master_delivery
master_delivery -m <path_to_master> -d <path_to_delivery_root>

Symbolic links in Ubuntu: recursively link the files in one directory into another

After searching Stack Overflow and Google for the past hour I thought I would ask. If the title does not make sense, here is what I am looking to achieve.
/var/www/xxx/
Say there are files in this above directory.
/var/www/yyy/
I want the files found in directory xxx to be symbolically linked within directory yyy.
I cannot figure out how to get the symbolic links to work as such:
/var/www/yyy/filefromfolderxxx.html
as opposed to what I keep getting:
/var/www/yyy/xxx/filefromfolderxxx.html
Any help would be greatly appreciated.
Try this:
cd /var/www/xxx
for a in * ; do ln -s "/var/www/xxx/$a" "/var/www/yyy/$a" ; done
This will symlink all the files one-by-one.
It's a bit messy, though. If you have multiple sites sitting on the same codebase but requiring different configuration files, you should really teach your framework how to coordinate that for you. It's not really difficult, but it requires more thought than I can spare for this reply, I'm sorry.
Just use
ln -s /var/www/xxx/* /var/www/yyy
This tells ln to create a symlink in folder yyy for each file in xxx. No need for for loops.
