How to change SYMLINK to SYMLINKD in batch script - windows

We're sharing SYMLINKD entries (directory symlinks) in our git project. It almost works, except git turns our SYMLINKD entries into SYMLINK entries when they are pulled on another machine.
To be clear, on the original machine, symlink is created using the command:
mklink /D Annotations ..\..\submodules\Annotations\Assets
On the original machine, the dir cmd displays:
25/04/2018 09:52 <SYMLINKD> Annotations [..\..\submodules\Annotations\Assets]
After cloning, on the receiving machine, we get
27/04/2018 10:52 <SYMLINK> Annotations [..\..\submodules\Annotations\Assets]
As you might guess, a file-type symlink pointing at a directory [..\..\submodules\Annotations\Assets] does not work correctly.
To fix this problem we either need to:
1. Prevent git from modifying our symlink types.
2. Fix our symlinks with a batch script triggered by a githook.
We're going with option 2, since we do not want to require all users to use a modified version of git.
My limited knowledge of batch scripting is impeding me. So far, I have looked into simply modifying the attrib of the file, using the info here:
How to get attributes of a file using batch file and https://superuser.com/questions/653951/how-to-remove-read-only-attribute-recursively-on-windows-7.
Can anyone suggest what attrib commands I need to modify the symlink?
Alternatively, I realise I can delete and recreate the symlink, but how do I get the target directory for the existing symlink short of using the dir command and parsing the path from the output?

I think it's https://github.com/git-for-windows/git/issues/1646.
To be more clear: your question appears to be a manifestation of the XY problem. The Git instance used to clone/fetch the project appears to mishandle symbolic links to directories, creating symbolic links pointing to files instead. That looks like a bug in Git for Windows (GfW), so instead of digging into it, you've invented a workaround and are asking how to make it work.
So I'd rather try to help the GfW maintainer and whoever reported #1646 to fix the problem. If you need a stop-gap solution, I'd say a proper way would be to go another route and script several calls to git ls-tree to figure out which entries are directory symlinks (they have a special set of mode bits; you may start here).
So you would traverse all the tree objects of the HEAD commit, recursively, figure out which symlinks point at directories, and then fix up the matching entries in the work tree by deleting them and recreating them with mklink /D or whatever creates the correct sort of symlink.
Unfortunately, I'm afraid trying to script this with the limited scripting facilities of cmd.exe would be an exercise in futility. I'd take a more capable language (PowerShell, for example; since you're probably a Windows shop, even .NET would be OK).

Related

Windows - hard links to files in a git repository break often

I maintain a private Git repository with all of my config and dotfiles (.bashrc, profile.ps1, .emacs etc.).
On Windows this repository is stored under C:\git\config. Most applications expect the files to be elsewhere, so I added hard links between the repository and the expected locations.
Example
On Linux .emacs is located in ~/git/config/.emacs but emacs expects it to be at ~/.emacs. I run:
$ sudo ln -s ~/git/config/.emacs ~/.emacs
On Windows my .emacs is located in C:\git\config\.emacs, but emacs expects it to be in C:\users\ayrton\.emacs. I run:
PS> cmd /c mklink /H C:\users\ayrton\.emacs C:\git\config\.emacs
Issue
On Linux this seems to work fine: when I update the original file, the contents of the link update and everything stays in sync.
On Windows, the links break after a period of time and the files become out of sync (the file contents are different).
Why do the links break on Windows? Is there an alternative solution?
I've seen this StackOverflow post: Can't Hard Link the gitconfig File
So I’ve finally found a solution that takes the best of both: put the repo in a subdirectory, and instead of symlinks, add a configuration option for “core.worktree” to be your home directory. Now when you’re in your home directory you’re not in a git repo (so the first problem is gone), and you don’t need to deal with fragile symlinks as in the second case. You still have the minor hassle of excluding paths that you don’t want versioned (eg, the “*” in “.git/info/exclude” trick), but that’s not new.
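As a concrete illustration of the quoted approach (a sketch only; it assumes the repository lives at ~/git/config and that .bashrc is one of the files you want versioned):

cd ~/git/config
git config core.worktree "$HOME"   # git now treats your home directory as the work tree
echo '*' >> .git/info/exclude      # the "exclude everything" trick mentioned above
git add -f .bashrc                 # force-add only the files you actually want tracked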
The problem here is that the expected locations are different on Windows vs. Linux. For example, VSCode expects the user settings to be in:
Linux: $HOME/.config/Code/User/settings.json
Windows: %APPDATA%\Code\User\settings.json
Ideally I would like my repository to be platform independent. If I take the core.worktree approach (e.g. make core.worktree be / or C:\, then exclude everything except specific files), I would have to maintain two copies of some configuration files when their absolute paths differ across operating systems.
Hard links can break if an editor opens/creates the file as a new blank file each time you save. It would not surprise me if Notepad did this, because it reads the entire file into memory and has no need for the original file after loading it.
You can try to create a file symlink instead of hardlink on Windows.
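For example, reusing the paths from the question (a plain mklink with no switch creates a file symbolic link; like /D it requires an elevated prompt or Developer Mode):

PS> cmd /c mklink C:\users\ayrton\.emacs C:\git\config\.emacs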

Executing a command from the current directory without dot slash like "./command"

I feel like I'm missing something very basic so apologies if this question is obtuse. I've been struggling with this problem for as long as I've been using the bash shell.
Say I have a structure like this:
├──bin
│  └──command (executable)
This will execute:
$ bin/command
then I symlink bin/command to the project root
$ ln -s bin/command c
like so
├──c (symlink to bin/command)
├──bin
│  └──command (executable)
I can't do the following (errors with -bash: c: command not found)
$ c
I must do?
$ ./c
What's going on here? Is it possible to execute a command from the current directory without preceding it with ./ and without using a system-wide alias? It would be very convenient to give distributed executables and utility scripts one-letter, folder-specific shortcuts on a per-project basis.
It's not a matter of bash not allowing execution from the current directory, but rather, you haven't added the current directory to your list of directories to execute from.
export PATH=".:$PATH"
$ c
$
This can be a security risk, however, because if the directory contains files which you don't trust or know where they came from, a file in the current directory could be confused with a system command.
For example, say the current directory is called "foo" and your colleague asks you to go into "foo" and set the permissions of "bar" to 755. As root, you run "chmod 755 bar".
You assume chmod really is chmod, but if there is a file named chmod in the current directory and your colleague put it there, chmod is really a program he wrote and you are running it as root. Perhaps "chmod" resets the root password on the box or does something else dangerous.
Therefore, the standard is to limit command executions which don't specify a directory to a set of explicitly trusted directories.
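A minimal sketch of that failure mode (hypothetical scratch directory; the fake chmod just prints a message instead of doing anything harmful):

cd "$(mktemp -d)"
printf '#!/bin/sh\necho "not the real chmod"\n' > chmod
/bin/chmod +x chmod    # call the real chmod by full path to make the fake one executable
PATH=".:$PATH"
chmod 755 somefile     # now resolves to ./chmod, not /bin/chmod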
Beware that the accepted answer introduces a serious vulnerability!
If you do add the current directory to your PATH, do not put it at the beginning; that would be a very risky setting.
There are still possible vulnerabilities when the current directory is at the end, but far fewer, so this is what I would suggest:
PATH="$PATH":.
Here, the current directory is only searched after every directory already present in the PATH has been explored, so the risk of an existing command being shadowed by a hostile one is gone. There is still a risk of an uninstalled command or a typo being exploited, but it is much lower. Just make sure the dot is always at the end of the PATH when you add new directories to it.
You could add . to your PATH. (See kamituel's answer for details)
Also there is ~/.local/bin for user specific binaries on many distros.
What you can do is add the current dir (.) to the $PATH:
export PATH=.:$PATH
But this can pose a security issue, so be aware of that. See this ServerFault answer on why it's not such a good idea, especially for the root account.

Copying directories recursively using shell script

Should be an easy question for the gurus here, though it's hard to explain it in text so hopefully this is clear. I've got two directories on a box with some flavor of unix on it. I've got a script that I want to use to move all the files and directories from one location to another.
First, an example of how the directories look:
Directory A: final/results/2012/2012-02/2012-02-25/name/files
Directory B: test/results/2012/2012-02/2012-02-24/name/files
So you see they're very similar. What I want to do is move everything from the Directory B 2012 directory, recursively, to the same level of Directory A. So you'd end up with:
someproject/results/2012/2012-02/2012-02-25/name/files
someproject/results/2012/2012-02/2012-02-24/name/files
etc.
I want this script to be future proof though, meaning I don't want the 2012 hardcoded. Also, towards the end of a month you will potentially have data from two different months and both need to be copied into the 2012 directory. So here is the command I used in the shell script file:
CONS="/someproject";
ROOT="/test";
/bin/cp -r ${ROOT}/results/* ${CONS}/results/*
but this resulted in:
/final/results/2012/2012-02/2012-02-25/name/files
and
/final/results/2012/2012/2012-02/2012-02-24/name/files
So as I hope is clear, it started a level below where I wanted it to. Can anyone fill me in on what I'm doing wrong, if you can understand what I'm even trying to explain? My apologies if it's not clear. I'm sure this is a fairly simple fix, but I'm not sure what to do. Shell scripting is not a strong point of mine.
One poster suggests rsync, which is overkill.
cp -rp will work fine. If you want to move the files, just mv the directory; it and everything under it will move too.
The only real problem here is the use of terminating *'s on the command line in the original script. You don't need the *: you're just trying to pass directories to the cp command, not the names of all the files already in the source (and, more importantly, the destination).
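Concretely, with the variables from the question, one corrected version would be (the /. idiom copies the contents of results, dotfiles included, into the existing destination):

CONS="/someproject"
ROOT="/test"
/bin/cp -rp "${ROOT}/results/." "${CONS}/results/"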
You could also use a tool like rsync to make sure your source and target are synchronized.
rsync -av ${ROOT}/results/ ${CONS}/results/
You specified that you want to "move" the files, though. Which means deleting the originals after they're copied:
rsync -av --remove-source-files ${ROOT}/results/ ${CONS}/results/
If you start playing around with rsync, be sure to read the man page about how it treats trailing slashes.
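The short version of the trailing-slash rule:

rsync -av src  dest/   # copies the directory itself, producing dest/src/...
rsync -av src/ dest/   # copies the contents of src directly into dest/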

Problem deleting .svn directories on Windows XP

I don't seem to have this problem on my home laptop with Windows XP, but then I don't do much work there.
On my work laptop, with Windows XP, I have a problem deleting directories that contain .svn directories. When the delete does eventually work, I have the same issue emptying the Recycle Bin. The pop-up window says "Cannot remove folder text-base: The directory is not empty" (or prop-base, or some other folder under .svn).
This continued to happen after I changed config of TortoiseSVN to stop the TSVN cache process from running and after a reboot of the system.
Multiple tries will eventually get it done. But it is a huge annoyance because there are other issues I'm trying to fix, so I'm hoping it is related.
'Connected Backup PC' also runs on the laptop and the real problem is that cygwin commands don't always work. So I keep thinking the dot files and dot directories have something to do with both problems and/or the backup or other process scanning the directories is doing it. But I've run out of ideas of what to try or how to identify the problem further.
You don't need to reboot; just open Task Manager and kill TSVNCache.exe.
This is safe, too. It's designed so you can kill it and it will automatically restart when needed.
(As a result of the auto-restart, note that browsing some SVN folders in Explorer, File-Open dialogs, etc. may cause TSVNCache.exe to restart. Keep an eye on Task Manager.)
Tortoise SVN is great but I have found that TSVNCache.exe can hold on to locks and get in the way at times. (Sometimes justified, sometimes not.) As a result, for some automated scripts I run I include commands to kill TSVNCache.exe as part of the scripts so it doesn't get in the way. That's only worth doing if it's an operation you perform often, though.
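Such a script might start with something like this (the working-copy path is hypothetical):

rem Kill the cache first; it restarts automatically when next needed
taskkill /F /IM TSVNCache.exe 2>nul
rd /s /q C:\work\some-working-copy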
You can try a few things:
Since you are getting this error frequently, you can use handle.exe from Sysinternals to check which processes currently have open handles on the .svn\* directories (invocation shown below). If the handle utility reports any process, try stopping that process and then delete the directories.
Error while deleting from the Recycle Bin: in simple terms, when a file is sent to the Recycle Bin it is not actually deleted; rather, a few manipulations are done in the directory hierarchy (at the file-system level) to avoid showing the file while browsing a folder's contents. So if you resolve the problem mentioned in the first point, you will probably not get this error either.
Cygwin command not working: Running a cygwin command on windows requires (in particular) cygwin1.dll, which is known to be shipped with other programs (eg: CopSsh, some version of svn clients etc...) as well. If there is any mismatch in the version of cygwin1.dll, cygwin commands won't work. Try searching for cygwin1.dll on your computer and try to resolve version conflicts (if any).
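For the first point above, the invocation can be as simple as this (handle.exe is part of the Sysinternals suite and searches all processes for handles whose path contains the given string):

handle.exe .svn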
Did you ever run mkpasswd and mkgroup for Cygwin? If you're using Cygwin from the command line without them, you are pretty much guaranteed to have file-system permission issues, and you have to read a little to fix them.
http://cygwin.com/cygwin-ug-net/ntsec.html
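For reference, regenerating the files from the local account database looked like this on Cygwin versions of that era:

mkpasswd -l > /etc/passwd
mkgroup -l > /etc/group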
Try this answer from me. Although it's given for TortoiseGit instead of TortoiseSVN, the handling is the same:
disable the status cache (i.e. prevent the TSVNCache.exe from accessing the .svn folders continuously)
delete what you have to delete
enable the status cache to get updated overlays again
I have just experienced this problem (or similar)
I am using tortoise 1.6.7
To fix it I went to 'Tortoise Settings' from the Tortoise context menu.
From there, select "Icon Overlays" in the tree widget.
In the icon overlays page, I entered the path that was giving me angst into "Exclude paths:", and Tortoise no longer holds that directory handle.
This is a directory that is often deleted by a process other than explorer.
Since what it appears you are trying to do is export the repository from SVN, why not use the export functionality of TortoiseSVN? This removes all .svn directories from the generated 'working copy'. Cmdline: http://svnbook.red-bean.com/en/1.0/re10.html
If you want to delete all subfolders named .svn on Windows,
then create a batch file with this content:
for /f "tokens=* delims=" %%i in ('dir /s /b /a:d *.svn') do (
rd /s /q "%%i"
)
Save it in a file named del_All_Dot_SVN_Folders.cmd and run it. You're done.
Thanks to http://www.axelscript.com/2008/03/11/delete-all-svn-files-in-windows/
Note that the code above uses .svn, whereas the code in the link uses only *svn, so it's better to use .svn to avoid accidentally removing unintended directories.

Rsync bash script and hard linking files

I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to backup the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then delete the Current symlink and re-create it pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running HFS filesystem, which supports hard linking. I am trying to figure out if something else is not supporting this, or if I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
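For reference, the manual test can be as simple as this (hypothetical paths on the AFP mount); "Operation not supported" here would point at AFP rather than at rsync:

ln /Volumes/Backup/existing-file /Volumes/Backup/hardlink-test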
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
