I have a Windows service developed in VB.NET. Every night at 8 PM this Windows service copies a file from my C:\ftpDocs folder to Y:\FtpDocs.
Y: is a mapped drive which points to \\sourceServer\Output files. When I run the same code from a VB.NET Windows application instead of a Windows service it works absolutely fine, but from the service it throws an access denied error when accessing \\sourceServer\Output.
It seems the Windows service runs from C:\windows\system32. For this reason I tried changing the current directory to C:\ftpService (This is the folder where my application is).
To access the mapped drive I provide a userid and password which is not my Windows userid and password. Do you think this is the reason why it is not able to access it from the Windows service?
If yes, how is it working from the Windows application? This issue has not gone away for the past month.
The set of currently mapped drives is maintained per user -- it'd be a big no-no for me to be able to access files on a share on which you have credentials just because we're both logged on at the same time.
Your service will need to map the share itself using saved credentials of some kind (you could hard code them, if you like, though that's not terribly secure and represents a maintainability burden besides). A good example of how to do this is here -- though, I haven't used this code, I've just read the article.
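For illustration, a minimal sketch of that self-mapping using WNetAddConnection2 from the Win32 API (the same call is reachable from VB.NET via P/Invoke); the drive letter, share path and credentials here are placeholders, not values from the question:

#include <windows.h>
#include <winnetwk.h>
#pragma comment(lib, "mpr.lib")

// Map Y: to the share with explicit credentials, from inside the service process.
bool MapOutputShare()
{
    NETRESOURCEW nr = {};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = const_cast<LPWSTR>(L"Y:");
    nr.lpRemoteName = const_cast<LPWSTR>(L"\\\\sourceServer\\Output");

    // Password first, then user name -- the credentials the share expects,
    // not the account the service runs under.
    DWORD rc = WNetAddConnection2W(&nr, L"sharePassword",
                                   L"sourceServer\\shareUser", 0);
    return rc == NO_ERROR;   // on failure, rc is the Win32 error code
}

WNetCancelConnection2 can drop the mapping again once the nightly copy is finished.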
Typically a Windows service runs under an ID whose credentials are not authorized to access files on the network. Try running your Windows service under a domain account that can access the network files. Make sure that this account has access to both the network and local folders/files that it will be reading and writing.
Also, you'll want to use the UNC path, not a mapped drive. The mapped drive won't be mounted for the service.
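Assuming the service account has been given rights on the share as described above, the copy can then target the UNC path directly; a rough sketch (the file names are only examples):

#include <windows.h>

// Copy straight to the UNC path -- no drive letter involved, so nothing needs
// to be mounted for the service.
bool CopyNightlyFile()
{
    return CopyFileW(L"C:\\ftpDocs\\report.txt",
                     L"\\\\sourceServer\\Output\\report.txt",
                     FALSE) != FALSE;   // FALSE = overwrite an existing target
}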
Related
I have to write a script on a Lotus server, which is on a Windows server, to save a CSV file on a UNIX server, and the UNIX server path requires authentication. Can somebody help me or suggest how to do it?
Thanks in advance.
Siddhartha
Could setting up an FTP server on Domino and accessing it from your UNIX server be an option?
Mindoo FTP server
I once resolved this in two steps:
1. Save the file to a temporary directory on the Domino server using LotusScript
2. Create a scheduled task on the Windows server to copy the file to the second server
Advantages:
You can specify any user in the scheduled task and you don't have to care about accessibility of the other server.
Disadvantages:
Two separate processes.
Hope that helps.
Michael
In my scenario which was very similar to yours, I did the following:
On the Windows server, I created a mapped drive to the folder on the Unix OS. This also handled the authentication.
In the LotusScript Agent, I extracted to this Mapped Drive, which worked 100%.
You need to provide more details. Presuming you can access the Unix folder from Windows Explorer, map the drive and let Windows store the password. Then access it through the mapped drive letter.
LotusScript can't write to UNC locations, so you need the drive letter.
That file will probably be picked up by another program. CSV is the worst approach. You could offer to write to a web service or provide one.
Update
On Unix "access" more often than not doesn't mean a CIFS (a.k.a Windows share) access, but SSH (or FTP). For SSH you would want to:
configure SSH Keys, so you actually don't need username/password any more
use a Java library as asked on Stackoverflow before (or an alternative)
you also could write the file to a temp directory and call a cmd file for the copy operation
With a little care (make the cmd file configurable) the stuff will work when moving your Domino to Unix/Linux too
Let us know how it goes
I'm mounting a folder to a virtual disk like this (I'm on Windows XP):
subst z: c:\virtual_disk
This works perfectly okay except for one thing: I have a service (created with C++ / CreateService(...)) running, and it can write files to c:\virtual_disk but not to z:\.
I'm using classic fopen, fwrite etc.
I have, I think, narrowed down the problem to some sort of permission problem, I'm not sure though.
The service runs on the "localSystem" account and the folder is mounted by me using an administrator account.
Any help appreciated!
Mapping is per user. If you mapped Z: using your own account, it won't exist for the local system account. You can either run the service under your own account, or have it do its own mapping.
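If the second option appeals, here is a sketch of the service doing the substitution itself with DefineDosDevice, which creates the same kind of Z: alias that subst does (the paths match the question; the function names are made up):

#include <windows.h>

// Create the Z: -> c:\virtual_disk substitution from inside the service's own
// context, so z:\ is visible to the service regardless of who is logged on.
bool MapVirtualDisk()
{
    return DefineDosDeviceW(0, L"Z:", L"C:\\virtual_disk") != FALSE;
}

// Remove the definition again when the service shuts down.
void UnmapVirtualDisk()
{
    DefineDosDeviceW(DDD_REMOVE_DEFINITION, L"Z:", nullptr);
}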
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole User RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against this, there is a 3rd option: You can allocate local storage in the service config, then access it from PHP using a dll reference, then you will have access to that folder. Please remember local storage is not persisted, so it's gone during a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
I am running on a Windows Server 2003. This is my problem:
I wrote a Perl script to automate the copy of some files from my Server machine to some network drives. I am using xcopy to copy the files. My problem is the permissions.
If I run the script from the command line, it works; all the copies are successful.
If I try to run the script using a service, all the copies fail. This service is a program that I wrote which takes the script and runs it. In the background, all it does is call the C function 'system' to run the same program I can run from the command line.
I have tried many variations of this to figure out what is wrong with it but I can't see why the service would not run the same way I run it from the command line.
I set up the service to run as the same user I am using from the command line.
I also tried to map the network drives as the user that has write permission, but the result is the same. Manually the script works; from the service, it doesn't.
Any suggestion is appreciated.
Thanks
Tony
The service may be running as the system and not have access to the network drives. In the Service settings, change the service to run under your account (or an account with the relevant permissions/mappings).
When the service runs, it uses whatever credentials you specify in the Services manager of Windows. The default, LOCAL SERVICE, probably does not have permission to access the resources to be copied.
Create a new user account with the minimum set of permissions needed to perform the copy and configure your service to run under that account.
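If you would rather make that change programmatically than through the Services manager, something along these lines should work -- the service name, account and password here are placeholders:

#include <windows.h>

// Point an already-installed service at the dedicated copy account.
bool SetServiceAccount()
{
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) return false;

    SC_HANDLE svc = OpenServiceW(scm, L"PerlCopyService", SERVICE_CHANGE_CONFIG);
    bool ok = false;
    if (svc)
    {
        // SERVICE_NO_CHANGE leaves type, start mode and error control alone;
        // only the logon account and its password are updated.
        ok = ChangeServiceConfigW(svc,
                                  SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE,
                                  nullptr, nullptr, nullptr, nullptr,
                                  L"MYDOMAIN\\copyuser", L"copyUserPassword",
                                  nullptr) != FALSE;
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return ok;
}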
I did figure out the issue (I think), and that matches what I later found in another post:
https://serverfault.com/questions/4623/windows-can-i-map-a-network-drive-for-a-service-account
<...Persistent drive mappings are only restored during an interactive login, which the service does not use. I believe the only way to get a service to use a network drive is for that service to map the drive itself or alternatively for it to us a UNC path instead of a mapped drive.>
What I did was map the drive from within the service, and that seems to work. It turns out that if I map the drive and save the credentials, I can later access the drive without having to map it again. I don't know why this approach works, though.
-Thanks everybody for your help.
Tony
I am able to get/set security attributes (group, owner, DACL, SACL) of files on an NTFS volume by using the GetSecurityInfo/SetSecurityInfo API. The handles I pass to these APIs must be opened with specific access rights (READ_CONTROL, ACCESS_SYSTEM_SECURITY, WRITE_DAC, WRITE_OWNER), which require certain privileges (SE_SECURITY, SE_BACKUP, SE_RESTORE) to be enabled while creating them with CreateFile. That is no problem at all if the files are located on an NTFS volume and the calling process has sufficient rights.
There is a problem, however, if the files are actually located on a network share: creating the file handles fails with ACCESS_DENIED (5) or PRIVILEGE_NOT_HELD (1314). I guess this is because the attempt to create the file handle is actually made on the remote machine, in the context of a network logon session that represents my user on the remote machine, and the required privileges are not enabled for that remote process. Is there a way I can get past this limitation, i.e. be able to get/set security attributes of files on network shares?
A similar problem is getting a handle to a directory on a network share. While I am able to do it locally (by using FILE_FLAG_BACKUP_SEMANTICS), I understand that this particular flag is not redirected to the remote machine, which I believe is the reason I can't open a handle to a directory on a network share. Is there a way to do this?
Well, it seems I was the one at fault here - I had been testing this with a user that, although an administrator on my local machine, is a regular restricted user on the file server, which caused all the trouble. You can copy security attributes and open handles to directories on a network share if you connect to it with a user that has sufficient rights on the file server sharing the resources.
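For reference, the pattern from the question looks roughly like this in C++ -- enable the privilege, open the handle with the security-related access rights (plus FILE_FLAG_BACKUP_SEMANTICS for a directory), then read the descriptor. The path is only an example, and as noted above, the account you connect with still needs the corresponding rights on the file server:

#include <windows.h>
#include <aclapi.h>
#pragma comment(lib, "advapi32.lib")

// Enable a named privilege (e.g. SeSecurityPrivilege) in the process token.
static bool EnablePrivilege(LPCWSTR name)
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return false;

    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    bool ok = LookupPrivilegeValueW(nullptr, name, &tp.Privileges[0].Luid) &&
              AdjustTokenPrivileges(token, FALSE, &tp, sizeof(tp), nullptr, nullptr) &&
              GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

// Read owner, group, DACL and SACL of a directory on a share.
bool DumpSecurity()
{
    if (!EnablePrivilege(L"SeSecurityPrivilege"))   // SE_SECURITY_NAME
        return false;

    HANDLE h = CreateFileW(L"\\\\fileServer\\share\\someDir",
                           READ_CONTROL | ACCESS_SYSTEM_SECURITY,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           nullptr, OPEN_EXISTING,
                           FILE_FLAG_BACKUP_SEMANTICS,   // needed to open a directory
                           nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    PSECURITY_DESCRIPTOR sd = nullptr;
    PSID owner = nullptr, group = nullptr;
    PACL dacl = nullptr, sacl = nullptr;
    DWORD rc = GetSecurityInfo(h, SE_FILE_OBJECT,
                               OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION |
                               DACL_SECURITY_INFORMATION | SACL_SECURITY_INFORMATION,
                               &owner, &group, &dacl, &sacl, &sd);
    CloseHandle(h);
    if (sd) LocalFree(sd);
    return rc == ERROR_SUCCESS;
}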