Preserving File Ownership on a Win7 Share - windows

I am trying to set up a "dropbox" on a Win7 workstation that we will use to process simulation jobs. My plan was to pull the ownership from the file (a simple dir /q "filename") so I can use the owner information during the simulation (for example, send them an email when the job is done).
The problem is that when a user drops a simulation file on the share I set up, the ownership is set to BUILTIN\Administrators. I have tried tweaking the share settings, but so far nothing seems to work.
I do have a workaround where users embed their email address in the simulation file and I pull it from there, but I'm trying to make this easier, since I know some users will forget to do that... Any ideas on how to preserve the ownership information?

Quite possibly, you could embed the owner's email address as an alternate data stream in the file.
And with a few PowerShell scripts, you could write the job owner into the file at submit time and extract it on the remote machine at run time.
I believe the alternate data stream survives a command-line copy as long as NTFS is used everywhere as the filesystem.
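If it helps, here's a minimal sketch of the tagging idea in Go (in PowerShell, Set-Content and Get-Content with the -Stream parameter do the same thing). The file path, stream name, and email address are just placeholders, and it assumes the file sits on an NTFS volume:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical dropped simulation file on the share.
	const job = `C:\dropbox\job001.sim`

	// On NTFS, "file:streamname" addresses an alternate data stream,
	// so the ordinary file APIs can write and read it.
	if err := os.WriteFile(job+":owner", []byte("user@example.com"), 0644); err != nil {
		log.Fatal(err) // fails on FAT/exFAT volumes or non-Windows systems
	}

	owner, err := os.ReadFile(job + ":owner")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("job owner:", string(owner))
}
```

Note that a hop through a non-NTFS filesystem, or a tool that doesn't copy alternate data streams, will silently drop the tag, so it's worth verifying the stream exists on the processing side before relying on it.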

Related

Windows registry storage best practice

Background
I've recently been shunted into the world of Windows programming and I'm still trying to find my way around the best practices and ways of doing things, so I was hoping for some pointers on my use of the registry.
Not particularly relevant, but the background is that I am creating an installer in Go. A couple of points to get out of the way on that:
I am aware an MSI would usually be best practice for an installer (I have my reasons for going with a custom exe)
I know there are more obvious language choices than Go; just go with it
Current registry use
As part of the install process, I store several pieces of data in the registry:
run once commands:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
I create a few entries here: one to restart the installer process after a system reboot, and one to delete some temp files on reboot after uninstall
an uninstall entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Vendor
Product
The content here is the same as an MSI would create; I was careful not to add any custom fields (all static data until uninstall)
an application entry:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
I store some additional data about the installation here, some of which is needed for uninstall, such as state info from before the installation (again, all static content)
a temporary entry:
Computer\HKEY_CURRENT_USER\SOFTWARE\Vendor\Product
I store some temporary data here, which can include some sensitive user-entered data (usernames/passwords). I run some symmetric encryption to obscure the data, though my understanding is that this area of the registry is encrypted so only the user could access it anyway (I would like confirmation on that)
This data is used to resume after restart and then deleted
Questions
I'm looking for confirmation of, or corrections to, my current use of the registry.
I now need to pass some data between an application and a running service; this data would be updated every 1-2 minutes and would be a few bytes of JSON. Does the registry seem like a reasonable place to store variable data like this? If so, is there a particular place that is better suited for variable data? I was going to add it to:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Vendor\Product
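For reference, this is roughly how I was planning to write and read that value from Go using golang.org/x/sys/windows/registry (the "Status" value name and the JSON payload are just placeholders):

```go
package main

import (
	"log"

	"golang.org/x/sys/windows/registry"
)

func main() {
	// Writing to HKLM normally requires elevation, which the installer has anyway.
	k, _, err := registry.CreateKey(
		registry.LOCAL_MACHINE, `SOFTWARE\Vendor\Product`, registry.ALL_ACCESS)
	if err != nil {
		log.Fatal(err)
	}
	defer k.Close()

	// Placeholder value name and payload.
	if err := k.SetStringValue("Status", `{"state":"running"}`); err != nil {
		log.Fatal(err)
	}

	val, _, err := k.GetStringValue("Status")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("read back:", val)
}
```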
HKCU isn't encrypted to my knowledge. It's stored in a file called NTUSER.DAT and can be loaded as a hive under HKEY_USERS, where it is visible to other processes with sufficient rights to do so.
You would need to open up the rights on HKLM\SOFTWARE\Vendor\Product if you expect a user-privilege process to be able to write to it. If you want to pass data to a service, you might want to use some sort of IPC pipe instead. I'm not sure what's built into Go for this.
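If a named pipe turns out to be the route, one option worth looking at is the third-party github.com/Microsoft/go-winio package, which exposes Windows named pipes as net.Listener/net.Conn. A rough, untested sketch of the service side follows; the pipe name is a placeholder, and you would likely need to set PipeConfig.SecurityDescriptor so a non-elevated client can connect:

```go
package main

import (
	"io"
	"log"
	"net"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Placeholder pipe name; the application connects with winio.DialPipe on the same path.
	// nil config uses default pipe security; a custom SecurityDescriptor may be needed
	// so a user-level process can reach a pipe created by the service.
	l, err := winio.ListenPipe(`\\.\pipe\VendorProduct`, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	for {
		conn, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn)
	}
}

func handle(c net.Conn) {
	defer c.Close()
	// Read the small JSON payload the application pushes every 1-2 minutes
	// (assumes a simple connect, write, close protocol on the client side).
	payload, err := io.ReadAll(c)
	if err != nil {
		log.Println("read error:", err)
		return
	}
	log.Printf("received %d bytes of JSON: %s", len(payload), payload)
}
```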

Saving a CSV file on a UNIX server from a Windows-based Lotus server using LotusScript

I have to write a script on a Lotus server, which runs on Windows, to save a CSV file on a UNIX server, and the UNIX server path requires authentication. Can somebody help me or suggest how to do it?
Thanks in advance.
Siddhartha
Could setting up an FTP server on Domino and accessing it from your UNIX server be an option?
Mindoo FTP server
I once resolved this in two steps:
1. Save the file to a temporary directory on the Domino server using LotusScript
2. Create a scheduled task on the Windows server to copy the file to the second server
Advantages:
You can specify any user in the scheduled task and you don't have to care about accessibility of the other server.
Disadvantages:
Two separate processes.
Hope that helps.
Michael
In my scenario, which was very similar to yours, I did the following:
On the Windows server, I created a mapped drive to the folder on the UNIX machine. This also handled the authentication.
In the LotusScript agent, I extracted the file to this mapped drive, which worked 100%.
You need to provide more details. Presuming you can access the UNIX folder from Windows Explorer, map the drive and let Windows store the password, then access it through the mapped drive letter.
LotusScript can't write to UNC locations, so you need the drive letter.
That file will probably be picked up by another program, and CSV is the worst approach for that. You could offer to write to a web service, or provide one.
Update
On Unix "access" more often than not doesn't mean a CIFS (a.k.a Windows share) access, but SSH (or FTP). For SSH you would want to:
configure SSH Keys, so you actually don't need username/password any more
use a Java library as asked on Stackoverflow before (or an alternative)
you also could write the file to a temp directory and call a cmd file for the copy operation
With a little care (make the cmd file configurable) the stuff will work when moving your Domino to Unix/Linux too
Let us know how it goes

Windows Azure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get this done?
Please note that I'm very new to IIS and the surrounding stack, so I would appreciate precise answers. Thanks.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config and then access it from PHP using a DLL reference, which gives you access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).

What's the best way to (programmatically) determine a file's network origin?

For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved such information in the file, it will be lost.
With file auditing, file and directory operations can be tracked, but I don't think that will include the source path for file copies (just who created the file and when).
Yes, it seems like you would either need to detect the file transfer by intercepting network traffic or, if you have the ability to alter the file in some way, use public-key cryptography to sign files with a machine-specific key before they are transferred.
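To make the signing idea concrete, here is a hedged sketch using Go's standard crypto/ed25519: each machine holds its own key pair, the sender signs the file's SHA-256 digest before transfer, and the receiver checks which machine's public key verifies the signature. Key storage, distribution, and the file name are placeholders:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"log"
	"os"
)

func main() {
	// In practice each machine would persist its key pair; generated here for the sketch.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	data, err := os.ReadFile("payload.dat") // hypothetical file being transferred
	if err != nil {
		log.Fatal(err)
	}
	digest := sha256.Sum256(data)

	// Sender: sign the digest with the machine-specific private key.
	sig := ed25519.Sign(priv, digest[:])

	// Receiver: try each known machine's public key until one verifies.
	if ed25519.Verify(pub, digest[:], sig) {
		fmt.Println("file originated from the machine owning this public key")
	}
}
```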
Create a service on either the destination computer or on the file-hosting computers which adds records to an alternate data stream attached to each file, much the way Windows records zone information (the Zone.Identifier stream) for files downloaded from the internet.
You can have a background process on machine A which "tags" each file as having come from machine A on such-and-such a date and time. Then when machine B downloads the file, assuming both ends use NTFS filesystems, it can see the tag from A. Or, if you can't run a process on the server, you can write NTFS streams on the "client" side based on the packet-sniffing methods others have described. The bonus here is that future file copies will retain the data as long as they stay between NTFS systems.
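For reference, the zone tag mentioned above is just a small alternate data stream named Zone.Identifier, and a custom tag from machine A would work the same way under its own stream name. A quick hedged sketch in Go that dumps whatever Windows wrote there (the file path is a placeholder, and the stream only exists if something attached it):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical downloaded file on an NTFS volume.
	data, err := os.ReadFile(`C:\Users\Public\Downloads\setup.exe:Zone.Identifier`)
	if err != nil {
		log.Fatal(err) // no such stream, non-NTFS volume, or missing file
	}
	// Typically contains something like:
	//   [ZoneTransfer]
	//   ZoneId=3
	fmt.Print(string(data))
}
```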
Alternative: create a requirement that all file transfers must be done through a web portal (as opposed to network drag-and-drop), which gives you built-in logging, or through some other type of file-retrieval proxy. Do you have control over procedures like this?

Service doesn't behave the same as the command line

I am running Windows Server 2003. This is my problem:
I wrote a Perl script to automate copying some files from my server machine to some network drives; I am using xcopy to copy the files. My problem is permissions.
If I run the script from the command line, it works, all the copies are successful.
If I try to run the script from a service, all the copies fail. The service is a program I wrote that takes the script and runs it; in the background, all it does is call the C function 'system' to run the same command that I can run from the command line.
I have tried many variations of this to figure out what is wrong, but I can't see why the service would not behave the same way as when I run it from the command line.
I set up the service to run as the same user I use from the command line.
I also tried mapping the network drives as the user that has write permission, but the result is the same: run manually, the script works; from the service, it doesn't.
Any suggestion is appreciated.
Thanks
Tony
The service may be running as the system account and not have access to the network drives. In the service settings, change the service to run under your account (or an account with the relevant permissions/mappings).
When the service runs, it uses whatever credentials you specify in the Windows Services manager. The default, LOCAL SERVICE, probably does not have permission to access the resources to be copied.
Create a new user account with the minimum set of permissions needed to perform the copy and configure your service to run under that account.
I did figure out the issue (I think), and that matches what I later found in another post:
https://serverfault.com/questions/4623/windows-can-i-map-a-network-drive-for-a-service-account
<...Persistent drive mappings are only restored during an interactive login, which the service does not use. I believe the only way to get a service to use a network drive is for that service to map the drive itself or alternatively for it to use a UNC path instead of a mapped drive.>
What I did was map the drive from within the service, and that seems to work. It turns out that if I map the drive and save the credentials, I can later access the drive without having to map it again. I don't know why this approach works, though.
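For anyone hitting the same thing, the idea is simply that the mapping (or, better, a UNC path) has to come from inside the service's own session. This is not my original Perl, just a rough illustration in Go; the server, share, account, and password are placeholders, and the same could be done by shelling out from any language:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Map the drive from inside the service process; /persistent:no because the
	// mapping only needs to exist for this session. Placeholder credentials.
	out, err := exec.Command("net", "use", "Z:", `\\fileserver\simshare`,
		"secret", "/user:DOMAIN\\svc_copy", "/persistent:no").CombinedOutput()
	if err != nil {
		log.Fatalf("net use failed: %v\n%s", err, out)
	}

	// Copy exactly as the command-line run would; using a UNC destination
	// (\\fileserver\simshare\...) directly would avoid the mapping entirely.
	out, err = exec.Command("xcopy", `C:\jobs\results`, `Z:\results`, "/E", "/I", "/Y").CombinedOutput()
	if err != nil {
		log.Fatalf("xcopy failed: %v\n%s", err, out)
	}
	log.Printf("copy output:\n%s", out)
}
```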
-Thanks everybody for your help.
Tony
