Is there a way to shorten Network Drive file paths? - windows

We are currently using Network Locations to access our company SharePoint files.
We decided to use Network Locations instead of OneDrive, because OneDrive has a delay of 5-15 minutes between a person uploading a file and other people seeing the file in their File Explorer.
However, we ran into a problem with Network Locations, since File Explorer only allows file paths up to 260 characters (MAX_PATH). Our Network Location starts with
https://XXXXXXXXXX.sharepoint.com/Shared Documents/{Enter Folder names here}
Is there a way of shortening this start of the file path? If a path crosses the MAX_PATH limit, the files won't be accessible through File Explorer, only through the SharePoint site itself.
I know it would be possible to change the "Shared Documents" part to, for example, SD. But does anyone know how this would affect files that, for example, reference other files by that path?
Thanks! :)

We could not reduce this start of the file path, since it is the library's Internet address.
As for other files that reference the library name: normally, changing the library name will have no impact on files, because renaming a library changes only its display name while the URL keeps the original name.
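Before renaming anything, it can help to audit which paths in the synced library already exceed the limit. A minimal sketch in Python; the root path below is a placeholder for your own mapped Network Location:

```python
import os

MAX_PATH = 260  # classic Win32 limit, including drive letter and terminator

def find_long_paths(root, limit=MAX_PATH):
    """Yield every file or folder path under `root` longer than `limit`."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if len(full) > limit:
                yield full

if __name__ == "__main__":
    # placeholder root; point this at your mapped Network Location
    for path in find_long_paths(r"J:\Shared Documents"):
        print(len(path), path)
```

Running this before and after a rename such as "Shared Documents" → "SD" shows exactly which files move back under the limit.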

Related

Detect incompatible file location (iCloud, Dropbox, shared folders) for custom file format

I’m designing a custom file format. It will be either a monolithic file or a folder with smaller files. The data is rather large in total, and there is no need to load everything into memory at once; doing so would also be slower than necessary. One of the files may or may not be a database file, and running SQL queries against it would be useful.
The user can have many such files. The user might also want to share files with others, even if it takes some time to upload/download them.
Conceptually, I run into issues with shared network folders, Dropbox, iCloud, etc. Such services can lead to sync issues when the file is not loaded entirely into memory, and the database file can get corrupted.
One solution is to prohibit storing the file on such services, either by using a user/library folder or by forcing the user to pick a local folder.
Using a folder in the library means recreating a file navigation system like Finder, and it also limits the user's choice of where the files end up. Limiting the location to a local folder seems the better choice.
Is there a way to programmatically detect if a folder is local?
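There is no single portable API for this, but a common approach combines two checks: resolve the folder's mount point and classify that mount, plus a path-based check for known sync containers. A rough sketch; the filesystem-type and sync-folder hint lists are illustrative assumptions, not exhaustive, and on macOS the real filesystem type would come from a platform call such as `statfs`:

```python
import os

# Illustrative lists only; extend for your platform and target services.
NETWORK_FS_HINTS = ("smbfs", "afpfs", "nfs", "webdav", "cifs")
SYNC_DIR_HINTS = ("Library/Mobile Documents",   # iCloud Drive
                  "Library/CloudStorage",       # macOS 12+ cloud providers
                  "Dropbox")

def mount_point(path):
    """Walk up from `path` until we cross a filesystem boundary."""
    path = os.path.realpath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

def looks_local(fs_type):
    """`fs_type` must come from a platform-specific query (e.g. statfs)."""
    return fs_type.lower() not in NETWORK_FS_HINTS

def is_sync_folder(path):
    """Cheap path-based check for known sync-service containers."""
    p = os.path.abspath(path)
    return any(hint in p for hint in SYNC_DIR_HINTS)
```

Note that Dropbox and iCloud folders are local mounts, which is why the filesystem-type check alone is not enough and the path-based heuristic is needed on top.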

Is there a way to have multiple files with same backing data in macOS FileProvider extension?

I'm creating a macOS FileProvider extension for a remote document storage system (kind of like Google Drive), where it is possible to share a single document with multiple folders.
For example, Document1.pdf can simultaneously exist in Folder A and Folder B because it's shared with both folders. In my FileProvider extension, this means the file should be accessible in both folders:
Folder A/Document1.pdf
Folder B/Document1.pdf
But the file provider extension treats those as two completely separate files: if you download one of them and then try to open the other, it will redownload it, effectively doubling the space used on the user's disk and the network traffic.
I'm looking for a way to tell the FileProviderItem what the backing data for a given file is, and thus solve problems such as:
If the user downloads a file in one location, I would ideally tell the FileProvider extension that the same document in all other locations is now also downloaded (the cloud icon should disappear from all of them).
Some approaches I considered:
I thought of using symbolic links as part of a solution, but I don't think that's possible.
When the user tries to open a non-downloaded file, the fetchContents(for itemIdentifier) callback is invoked. Once the file is downloaded, I would ideally notify all the other files of the same document that they are now downloaded, e.g. by updating the isDownloaded property in NSFileProviderItem, but that doesn't seem to work. And even if it did, I still couldn't tell a file what its backing data file should be.
By turning off the Sandbox capability, I guess I could, when the user tries to download/open a file that has already been downloaded in another location, immediately report it as downloaded and provide a copy of the already-downloaded file as its data. But there are two drawbacks here:
3.1. I would have to turn off the Sandbox capability, because I want to access the file in the FileProvider path directly.
3.2. The system would still use disk space for each file. So if I have the same document in multiple folders, the extension would keep all those copies, with no option to tell it that all those files share the same backing data file somewhere in the extension's Container.
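As far as I know, the FileProvider API does not expose a way to point several items at one backing file. The underlying idea, though (one stored copy, many directory entries) can be sketched outside the API with a content-addressed store plus hard links. Everything below is a hypothetical illustration, not FileProvider code:

```python
import hashlib
import os

class BackingStore:
    """One blob per unique content; directory entries are hard links to it."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _blob_path(self, data):
        # content-addressed: identical data maps to the same blob file
        return os.path.join(self.root, hashlib.sha256(data).hexdigest())

    def materialize(self, data, dest):
        """Store `data` once; expose it at `dest` as a hard link."""
        blob = self._blob_path(data)
        if not os.path.exists(blob):
            with open(blob, "wb") as f:
                f.write(data)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if not os.path.exists(dest):
            os.link(blob, dest)   # same inode: no extra disk space
        return dest
```

With this layout, `FolderA/Document1.pdf` and `FolderB/Document1.pdf` share one inode, so fetching the content once makes it available at every location without duplicating disk usage.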

WS2012R2: Symlink from a network share to another network share?

I have a question about creating symlinks on a network share which link to another network share.
The Windows clients in our company have a network drive mapped on J:\
the UNC path is \\DataServer01\network
Previously, there was some kind of symlink in the network directory called "import" (so the UNC path was \\DataServer01\network\import), which linked to \\ERPServer01\share\import.
So the users could go to their mapped network drive J: and put an Excel file into J:\import, and the file actually ended up in \\ERPServer01\share\import.
Accidentally, the symlink was deleted by another admin. Now I am trying to recreate the symlink using
mklink /d import \\ERPServer01\share\import
So far the symlink was created, and you can access it from DataServer01 itself. But you can't access the symlink from the network drive J:\. If you try, you receive an error that the symbolic link cannot be accessed. I googled a lot, and the explanation of why this concept can't work (symlinks are resolved by the client, and remote-to-remote evaluation is disabled by default) is quite plausible.
The thing is, my predecessor got it to work somehow; he managed to create a proper "symlink" or hard link or something similar. How did he manage to get it to work? Unfortunately I can't ask him.
There is also no DFS in use. It must have been some other method.
I have to recreate it exactly how it was, because I don't want to explain to 300 users why they have to put their Excel sheets in another directory now. And I don't want to map another network drive.
Any ideas?
Possibly it wasn't a symlink before (have you checked your backup?). Alternatively, you can create a "magic" Explorer folder:
create an empty source folder
inside the source folder, create an Explorer shortcut to the target folder, named target
inside the source folder, create a desktop.ini text file with the contents
[.ShellClassInfo]
CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}
flag desktop.ini as System and Hidden
flag source folder as System
An Explorer magic link folder looks similar to a symlink but only works within Windows Explorer, whereas a symlink works with (nearly) everything once remote-to-remote symlink evaluation is enabled, e.g. through GPO or fsutil behavior set SymlinkEvaluation R2R:1.
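The steps above can be scripted. A hedged sketch in Python: it writes the desktop.ini and sets the attributes via the standard attrib command (so the attribute step only runs on Windows). Creating the target shortcut itself is left out, since .lnk files are easiest to make via Explorer or a WScript.Shell COM call; all folder names here are placeholders:

```python
import os
import subprocess
import sys

DESKTOP_INI = (
    "[.ShellClassInfo]\n"
    "CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}\n"
)

def make_magic_link(source):
    """Turn `source` into an Explorer 'magic' folder.

    A shortcut named target.lnk pointing at the real folder must be
    placed inside `source` separately.
    """
    os.makedirs(source, exist_ok=True)
    ini = os.path.join(source, "desktop.ini")
    with open(ini, "w") as f:
        f.write(DESKTOP_INI)
    if sys.platform == "win32":
        # flag desktop.ini as System + Hidden, and the folder as System
        subprocess.run(["attrib", "+S", "+H", ini], check=True)
        subprocess.run(["attrib", "+S", source], check=True)
```

The System attribute on the folder is what makes Explorer read the desktop.ini and treat the folder as a redirect.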

DLL loading with hardlink

I am trying to devise a method for loading DLLs from a common location for various products, so that the following directory structure avoids file replication:
INSTALLDIR/Product1/bin
INSTALLDIR/Product2/bin
..
INSTALLDIR/ProductN/bin
Instead of replicating DLLs in each product's bin directory above, I can create a DLL repository directory, 'DLLrepo', in INSTALLDIR and make all product executables load from it. I am thinking of doing this by creating a hardlink in each product's bin directory to each DLL in 'DLLrepo'. This will help to address platforms starting from WinXP; using the 'probing' method can address only Windows Server 2008 and above.
I'd like to get your opinion on whether this approach looks like a reasonable solution.
When we create a hardlink to a file, Explorer and the DIR command don't report a valid size for the folder containing the link; they count the full data size of the linked file in the directory's total size. This is a known issue in Windows, if I am not wrong. Is there any utility that I can use to verify the actual folder size? Is it possible to use 'chkdsk' on a directory path? Another thing I'd like to know is how to get the list of hard links created on a file's data.
"When we create a hardlink to a file, Explorer and the DIR command don't report a valid size for the folder containing the link; they count the full data size of the linked file in the directory's total size. This is a known issue in Windows, if I am not wrong. Is there any utility that I can use to verify the actual folder size?"
I can provide an answer, of sorts, for this part of the question. When you create file hardlinks, there's not really any concept of which "file" is the original. Each of them points to the space on disk that the data occupies, and modifying the file via any of these references affects the data that's seen when accessing it via any other hardlink. As such it's less a known "issue" and more a case of "this is how it works".
Consequently, there's no way to verify the "actual folder size" unless you look at the highest common parent of the folders that contain the links. At that point you can start single-counting each hard link to get an accurate idea of the space used on disk.
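The single-counting idea can be made concrete: sum file sizes under a root, but count each inode (i.e. each set of hard links) only once. A small Python sketch:

```python
import os

def deduplicated_size(root):
    """Total bytes under `root`, counting each set of hard links once."""
    seen = set()
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))
            key = (st.st_dev, st.st_ino)   # identifies the data, not the name
            if key not in seen:
                seen.add(key)
                total += st.st_size
    return total
```

As for listing the links that share a file's data: on recent Windows versions, fsutil hardlink list <file> prints every path that references the same data, and a file's link count (st_nlink in its metadata) tells you how many such references exist.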

How to copy junction as-is instead of the folder it points to?

I'm copying a set of folders from server 1 to server 2. Among the files I also have a junction: a folder with a set of config files. On server 1 this junction points to... let's say c:\Config (which contains config1.cfg and config2.cfg).
On server 2 I also have c:\Config with the same set of files, but of course they contain their own settings that I do not want to overwrite.
So what I want to do is copy the junction AS-IS. Instead, I get copies of config1.cfg and config2.cfg from server 1 :(
How can I solve this problem?
p.s.1. It's long to explain, but I cannot avoid using junctions here (it has something to do with a limitation on where the configuration must be placed; the subfolder junction points to an 'outside' folder).
p.s.2. OS is Windows Server 2003
FastCopy is a small program that does this.
Copying junctions from drive to drive doesn't make much sense, since a junction embeds an absolute path that is meaningful on a specific machine. What you really want is a symlink, which points to a path in the filesystem, but unfortunately that doesn't exist on Server 2003. You're out of luck here; you'll have to fix this up in a post-copy script.
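A post-copy script usually boils down to one rule: recreate links instead of following them. A sketch of that idea in Python, shown with symlinks, which exist everywhere Python runs; on Python 3.12+ on Windows, os.path.isjunction would let you special-case NTFS junctions the same way. For the symlink case alone, shutil.copytree(src, dst, symlinks=True) behaves similarly out of the box:

```python
import os
import shutil

def copy_preserving_links(src, dst):
    """Copy a tree, recreating links rather than copying their targets."""
    for entry in os.scandir(src):
        target = os.path.join(dst, entry.name)
        if entry.is_symlink():
            # recreate the link itself, pointing at the original target path
            os.symlink(os.readlink(entry.path), target)
        elif entry.is_dir():
            os.makedirs(target, exist_ok=True)
            copy_preserving_links(entry.path, target)
        else:
            shutil.copy2(entry.path, target)
```

On Server 2003 specifically, an alternative is to exclude junctions from the copy (robocopy's /XJ switch skips them) and recreate them afterwards with a junction tool such as linkd from the Resource Kit.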
