I have a Windows network in which many files are shared across many users with full control. I have a folder shared for Everyone on my system, so whenever I access it by machine name (Run -> \\Servername) from another system, I can see the shared folder and open and write files in it.
My requirement is to close, from across the network, any files that are open on my system. So I used NetFileEnum to list the IDs of all open files, intending to close them with the NetFileClose API.
The problem is that NetFileEnum returns what look like junk IDs, such as 111092900 and -1100100090, so I can't close the files from another machine. I then listed the network-opened files with the net file command, noted an ID (say 43), and hard-coded it into the call NetFileClose("Servername", 43); but when I executed it I got ERROR_ACCESS_DENIED. The same code run on the server itself closes the files successfully, and I have given all users full permission on the share.
So why the ERROR_ACCESS_DENIED, and why is NetFileEnum returning seemingly invalid IDs? Is there anything that needs to be done for these APIs to work? How do I use them properly to close files opened over the network?
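For reference, a minimal sketch of how NetFileEnum and NetFileClose are meant to fit together (info level 3 returns FILE_INFO_3 records; the server name is a placeholder and error handling is trimmed). Note that the documentation restricts remote use of these calls to members of the Administrators or Server Operators group on the server, which is separate from share permissions:

    // Minimal sketch: enumerate open files on \\Servername at info level 3 and
    // close each one. Requires admin rights on the target server.
    #include <windows.h>
    #include <lm.h>                         // NetFileEnum, NetFileClose, FILE_INFO_3
    #include <cstdio>
    #pragma comment(lib, "Netapi32.lib")

    int main()
    {
        LPWSTR server = const_cast<LPWSTR>(L"\\\\Servername");
        FILE_INFO_3 *buf = nullptr;
        DWORD entriesRead = 0, totalEntries = 0;
        DWORD_PTR resume = 0;
        NET_API_STATUS status;

        do {
            status = NetFileEnum(server, nullptr, nullptr, 3,
                                 reinterpret_cast<LPBYTE*>(&buf),
                                 MAX_PREFERRED_LENGTH,
                                 &entriesRead, &totalEntries, &resume);
            if (status == NERR_Success || status == ERROR_MORE_DATA) {
                for (DWORD i = 0; i < entriesRead; ++i) {
                    wprintf(L"id=%u  path=%s  user=%s\n",
                            buf[i].fi3_id, buf[i].fi3_pathname, buf[i].fi3_username);
                    NetFileClose(server, buf[i].fi3_id);   // close the remote open
                }
                NetApiBufferFree(buf);
                buf = nullptr;
            }
        } while (status == ERROR_MORE_DATA);
        return 0;
    }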
I am using MS Access 2016. I have a project that, in part, creates and moves folders around. I am currently using Application.CurrentPath & [form control value] & "\" (etc.) to store the created folders (and to move them when users are done). When I deploy, I want to split the db so I can work on the front-end without disturbing users and so that multiple users can work simultaneously. The back-end will be stored at a shared network location, and users will get a copy of the front-end on their PC to speed things up.
If I do it this way, the folders that the solution creates will end up in the same directory as each user's front-end (no bueno).
Should I change the code to be hard-coded to some network location (which means moving it is a pain) or change the code to refer to the back-end location? If I refer to the back-end location, what does that code look like?
Many thanks in advance!
I'm using WER to generate crash dumps for my application using this method:
https://learn.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps
My app runs in pods in Kubernetes, and I'm writing these dumps to a network share mounted in the pods. I then have a web application that queries this share for dump files and displays them in a view for download.
The problem I'm running into is the file name of the dumps, which is in this format:
<exe name><PID>.dmp
I'd like to add some identifying information (particularly hostname) to the name of the file. Is this possible? I've been searching Google since yesterday (using various phrasing, etc.) but I'm coming up blank.
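One workaround, since the LocalDumps registry values (DumpFolder, DumpCount, DumpType) don't appear to offer any file-name template, would be a small helper that renames the dumps after the fact. A rough sketch, assuming the dumps land in C:\dumps and the helper runs inside the same pod so GetComputerName returns the pod's hostname (paths and naming are illustrative only):

    // Hedged sketch: prefix each dump in the configured DumpFolder with the
    // local hostname.
    #include <windows.h>
    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    int wmain()
    {
        wchar_t host[MAX_COMPUTERNAME_LENGTH + 1] = {};
        DWORD len = MAX_COMPUTERNAME_LENGTH + 1;
        GetComputerNameW(host, &len);           // pod hostname inside the container

        const fs::path dumpDir = L"C:\\dumps";  // must match the DumpFolder setting
        for (const auto& entry : fs::directory_iterator(dumpDir)) {
            if (entry.path().extension() != L".dmp") continue;
            std::wstring name = entry.path().filename().wstring();
            if (name.rfind(host, 0) == 0) continue;   // already renamed
            fs::rename(entry.path(), dumpDir / (std::wstring(host) + L"_" + name));
        }
        return 0;
    }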
Admittedly, I'm asking this question just to help me in an argument I've been scheduled to have with a client.
Our devs, who are based in another country, run an FTP server that is mostly fully open to all anonymous users; this is to simplify the acquisition of new documents and updates for users of the application.
One directory in particular, let's call it updates, houses all the new updates but does not grant a directory listing to anonymous users due to access restrictions, so if you try to list the files in that directory using an FTP client you're met with the generic response:
550 Access is denied.
Failed to retrieve directory listing
However, if you have the exact URL of a file in that directory that anonymous users have access to, e.g. ftp://ftp.company.com/updates/latest_update_1.zip, you can download that file without issue.
My question comes in because I have a client who is somehow monitoring that directory as an anonymous user, knows when a new file (that anonymous has access to) becomes available there, and immediately downloads it. This directly affects their application, as files are often dropped there by the devs during QA and are not yet officially available because we haven't sent out notice of the change log and URL.
So how exactly is this client doing this? How is he able to discover files that anonymous has access to in a directory that does not list its files to anonymous users?
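For reference, downloading by exact name as an anonymous user really is trivial even though listing is denied; a minimal WinINet sketch that probes for a guessed file name (the probed name and the anonymous password below are made up):

    // Hedged sketch: if RETR is allowed but LIST is not, a client can simply try
    // predictable names until one opens.
    #include <windows.h>
    #include <wininet.h>
    #include <cstdio>
    #pragma comment(lib, "Wininet.lib")

    int main()
    {
        HINTERNET hNet = InternetOpenA("probe", INTERNET_OPEN_TYPE_DIRECT,
                                       nullptr, nullptr, 0);
        HINTERNET hFtp = InternetConnectA(hNet, "ftp.company.com",
                                          INTERNET_DEFAULT_FTP_PORT,
                                          "anonymous", "guest@example.com",
                                          INTERNET_SERVICE_FTP, 0, 0);
        HINTERNET hFile = FtpOpenFileA(hFtp, "updates/latest_update_2.zip",
                                       GENERIC_READ, FTP_TRANSFER_TYPE_BINARY, 0);
        if (hFile) {
            printf("file exists and is readable by anonymous\n");
            InternetCloseHandle(hFile);
        }
        InternetCloseHandle(hFtp);
        InternetCloseHandle(hNet);
        return 0;
    }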
For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions, or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved that information in the file, it is lost.
With file auditing, file and directory operations can be tracked, but I don't think that includes the source path for file copies (just who created the file and when).
Yes, it seems you would either need to detect the file transfer by intercepting network traffic or, if you have the ability to alter the file in some way, use public-key cryptography to sign files with a machine-specific key before they are transferred.
Create a service on either the destination computer or the file-hosting computers that adds a record to an alternate data stream attached to each file, much the way Windows attaches the Zone.Identifier stream to files downloaded from the internet.
You can have a background process on machine A that "tags" each file as having originated on machine A at such-and-such a date and time. Then, when machine B downloads the file, assuming both are using NTFS file systems, it can see the tag from A. Or, if you can't run a process on the server, you can write the NTFS stream on the "client" side using the packet-sniffing methods others have described. The bonus is that future copies of the file will retain the data as long as they stay between NTFS systems.
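As a rough illustration of the tagging idea, assuming NTFS and an illustrative stream name of origin.machine, writing such a tag is just a CreateFile on the named stream:

    // Hedged sketch: record the source machine's name in an alternate data
    // stream ("<file>:origin.machine"), in the spirit of Zone.Identifier.
    #include <windows.h>
    #include <string>

    bool TagWithSourceMachine(const std::wstring& file)
    {
        wchar_t host[MAX_COMPUTERNAME_LENGTH + 1] = {};
        DWORD len = MAX_COMPUTERNAME_LENGTH + 1;
        if (!GetComputerNameW(host, &len)) return false;

        // Open (or create) the alternate data stream attached to the file.
        HANDLE h = CreateFileW((file + L":origin.machine").c_str(),
                               GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;

        DWORD written = 0;
        BOOL ok = WriteFile(h, host, len * sizeof(wchar_t), &written, nullptr);
        CloseHandle(h);
        return ok != FALSE;
    }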
Alternative: require that all file transfers go through a web portal (as opposed to network drag-and-drop), which gives you built-in logging, or through some other kind of file-retrieval proxy. Do you have control over procedures like this?
I have a Windows service developed in VB.NET. Every night at 8 PM, this service copies a file from my C:\ftpDocs folder to the Y:\FtpDocs folder.
Y: is a mapped drive pointing to \\sourceServer\Output files. When I run the same code from a VB.NET Windows application instead of the Windows service, it works absolutely fine, but from the service it throws an access-denied error when accessing \\sourceServer\Output.
It seems the Windows service runs from C:\windows\system32, so I tried changing the current directory to C:\ftpService (the folder where my application is).
To access the mapped drive I provide a user ID and password that are not my Windows credentials. Do you think that is why the service is not able to access it?
If so, why does it work from the Windows application? I have been stuck on this issue for a month now.
Which drives are currently mapped is maintained per user -- it would be a big no-no for me to be able to access files on a share for which you have credentials just because we're both logged on at the same time.
Your service will need to map the share itself using saved credentials of some kind (you could hard-code them if you like, though that's not terribly secure and is a maintainability burden besides). A good example of how to do this is here -- though I haven't used this code myself, I've just read the article.
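A rough sketch of the underlying Win32 call (WNetAddConnection2), shown here in C++; the same API is reachable from VB.NET via P/Invoke. The UNC path comes from the question; the account name and where the credentials are stored are up to you:

    // Hedged sketch: establish a deviceless connection to the share using
    // explicit credentials, then access files via the UNC path.
    #include <windows.h>
    #include <winnetwk.h>
    #pragma comment(lib, "Mpr.lib")

    DWORD ConnectToShare(LPCWSTR user, LPCWSTR password)
    {
        NETRESOURCEW nr = {};
        nr.dwType       = RESOURCETYPE_DISK;
        nr.lpRemoteName = const_cast<LPWSTR>(L"\\\\sourceServer\\Output");
        nr.lpLocalName  = nullptr;   // no drive letter; use the UNC path afterwards

        // Returns NO_ERROR on success; credentials should come from secure storage.
        return WNetAddConnection2W(&nr, password, user, CONNECT_TEMPORARY);
    }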
Typically a Windows service runs under an account whose credentials are not authorized to access files on the network. Try running your Windows service under a domain account that can access the network files, and make sure that account has access to both the network and local folders/files it will be reading and writing.
Also, you'll want to use the UNC path, not a mapped drive. The mapped drive won't be mounted for the service.