Is it possible to access kernel objects on remote computers? I was reading that you could access remote kernel objects by using a symbolic link to \Device\Mup\server\object but I am not sure if that would work. Thanks for the help!
I know this is a little odd but I was trying to access a named pipe.
This is not possible in the general case - MUP is the broker that chooses which remote filesystem (WebDAV/SMB/NFS) to engage for a particular UNC path. What kernel objects are you trying to access specifically?
Edit: Named pipes are definitely doable - try the syntax:
\\machinename\pipe\nameofpipe
Keep in mind that the pipe has to be ACLed appropriately
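For illustration, a minimal client-side sketch in C: it assumes a pipe with that name already exists on the remote machine and that its DACL grants your network logon access (machinename and nameofpipe are placeholders).

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* machinename and nameofpipe are placeholders - substitute your own. */
    const char *pipePath = "\\\\machinename\\pipe\\nameofpipe";

    /* Opening a remote pipe goes through the SMB redirector, so the
       caller's network logon must be granted access by the pipe's DACL. */
    HANDLE hPipe = CreateFileA(pipePath, GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    const char msg[] = "hello";
    DWORD written = 0;
    if (!WriteFile(hPipe, msg, sizeof(msg), &written, NULL))
        printf("WriteFile failed: %lu\n", GetLastError());

    CloseHandle(hPipe);
    return 0;
}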
Is it possible to attach an FS filter driver on a mapped network drive?
I'm really new to this filter driver work and currently testing a demo version of an SDK. It works fine on my local drive and I was able to monitor and even deny file creation. But it doesn't seem to work on a mapped network drive.
So my question is: Is it even possible to do that?
Of course.
You simply need to check in your instance attach (InstanceSetup) callback whether the filesystem type is a network filesystem.
Read more here, especially the VolumeDeviceType and VolumeFilesystemType parameters.
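For reference, a minimal sketch of what that check can look like in the standard Filter Manager (minifilter) model - your SDK may wrap this differently, so treat the callback name and the exact constants as an illustration rather than your vendor's API:

#include <fltKernel.h>

/* Instance setup callback: decides per volume whether the filter attaches.
   Returning STATUS_SUCCESS attaches; STATUS_FLT_DO_NOT_ATTACH skips the volume. */
NTSTATUS
MyInstanceSetup(
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _In_ FLT_INSTANCE_SETUP_FLAGS Flags,
    _In_ DEVICE_TYPE VolumeDeviceType,
    _In_ FLT_FILESYSTEM_TYPE VolumeFilesystemType)
{
    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(Flags);

    /* Attach to the network redirector (MUP) as well as local disk volumes.
       Samples that only accept FILE_DEVICE_DISK_FILE_SYSTEM never see I/O
       on mapped network drives, which matches the symptom described above. */
    if (VolumeDeviceType == FILE_DEVICE_NETWORK_FILE_SYSTEM ||
        VolumeFilesystemType == FLT_FSTYPE_MUP) {
        return STATUS_SUCCESS;          /* monitor remote I/O too */
    }
    if (VolumeDeviceType == FILE_DEVICE_DISK_FILE_SYSTEM) {
        return STATUS_SUCCESS;          /* local volumes as before */
    }
    return STATUS_FLT_DO_NOT_ATTACH;
}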
Good luck,
Gabriel
I have to write a script on a Lotus Domino server, which runs on a Windows server, to save a CSV file on a UNIX server, and the UNIX server path requires authentication. Can somebody help me or suggest how to do it?
Thanks in advance.
Siddhartha
Could setting up an FTP server on Domino and accessing it from your UNIX server be an option?
Mindoo FTP server
I once resolved this in two steps:
1. Save the file to a temporary directory on the Domino server using LotusScript
2. Create a scheduled task on the Windows server to copy the file to the second server
Advantages:
You can specify any user in the scheduled task and you don't have to care about accessibility of the other server.
Disadvantages
Two separate processes.
Hope that helps.
Michael
In my scenario which was very similar to yours, I did the following:
On the Windows server, I created a mapped drive to the folder on the UNIX OS. This also handled the authentication.
In the LotusScript Agent, I extracted to this Mapped Drive, which worked 100%.
You need to provide more details. Presuming you can access the Unix folder from Windows Explorer, map the drive and let Windows store the password. Then access it through the mapped drive letter.
LotusScript can't write to UNC locations, so you need the drive letter.
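If mapping the drive by hand in Explorer is too fragile, the mapping can also be established programmatically before the agent runs. A minimal sketch using the Win32 WNet API - the server, share, drive letter and credentials are placeholders, and this only applies if the UNIX box actually exposes a CIFS/Samba share (see the update below):

#include <windows.h>
#include <winnetwk.h>
#include <stdio.h>

#pragma comment(lib, "mpr.lib")

int main(void)
{
    /* Placeholders - substitute the real share exported by the UNIX box
       (e.g. via Samba) and an account that is allowed to write there. */
    NETRESOURCEA nr = {0};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = "Z:";                    /* drive letter the agent will use */
    nr.lpRemoteName = "\\\\unixserver\\share";

    DWORD rc = WNetAddConnection2A(&nr, "password", "username",
                                   CONNECT_UPDATE_PROFILE); /* persist the mapping */
    if (rc != NO_ERROR) {
        printf("WNetAddConnection2 failed: %lu\n", rc);
        return 1;
    }
    printf("Z: is mapped; the LotusScript agent can now write to it.\n");
    return 0;
}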
That file will probably be picked up by another program. CSV is the worst approach for that. You could offer to write to a web service or provide one.
Update
On Unix "access" more often than not doesn't mean a CIFS (a.k.a Windows share) access, but SSH (or FTP). For SSH you would want to:
configure SSH Keys, so you actually don't need username/password any more
use a Java library as asked on Stackoverflow before (or an alternative)
you also could write the file to a temp directory and call a cmd file for the copy operation
With a little care (make the cmd file configurable), this will keep working when you move your Domino server to Unix/Linux too
Let us know how it goes
I have a daemon that forks the process.
This daemon accesses a database using the MySQL connector library.
When I do not fork, I am able to open and read the database fine; however, when I fork, I get
MySQL server has gone away
errors consistently on the first query...
Anyone know what could be causing this?
Edit: Oh, my apologies for misinterpreting.
Still, the problems caused by differences between daemonized and non-daemonized processes fall roughly into the following classes:
environment variables
LIBPATH
PATH
HOME, UID, EUID (HOME surprisingly enough gets (ab)used way too often)
mysql specific variables
permissions
what user is the daemon running as? elevated or privilege separation?
current working directory (traditionally / for daemons, where / might be a chroot jail instead of 'real' /)
Starting with kernel 2.4.19, Linux provides per-process mount namespaces. A mount namespace is the set of file system mounts that are visible to a process. Mount-point namespaces can be (and usually are) shared between multiple processes, and changes to the namespace (i.e., mounts and unmounts) by one process are visible to all other processes sharing the same namespace. (The pre-2.4.19 Linux situation can be considered as one in which a single namespace was shared by every process on the system.)
detached stdin/stdout causing trouble (IMO that would mean badly designed library, but who am I)
watch out that specific resources (file locks, socket connections, threads (!)) are NOT inherited across fork/execve - an already-open MySQL connection is exactly such a resource (see the sketch after this list). I recommend reading the linked article on daemonization (below), especially the section on 'Mutual Exclusion and Running a Single Copy [open,lockf,getpid]'
I'm sure I'm forgetting stuff
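To make the fork/inheritance point concrete for the "MySQL server has gone away" symptom: if the connection is opened before fork(), parent and child share the same socket, and whichever process touches it after the other has closed or disturbed it sees a dead connection. A minimal sketch using the MySQL C API (host, credentials and schema are placeholders) that opens the connection only in the child, after the fork:

#include <mysql.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid > 0)
        return 0;                 /* parent exits; the child lives on as the daemon */

    /* Child process: open the connection HERE, after the fork, so it is not
       a socket that the (now exiting) parent also owns and may tear down. */
    MYSQL *conn = mysql_init(NULL);
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "dbname", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    if (mysql_query(conn, "SELECT 1"))
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));

    mysql_close(conn);
    return 0;
}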
Ermm... what are you starting a mysql server process for? Mysql has plenty of sound init scripts that do work.
On the subject of proper daemonization: http://www.enderunix.org/docs/eng/daemon.php
Pay attention to the effects of sharing resources with fork children (e.g. file descriptors).
Besides that, you could just be missing basic environment settings. Peruse the official init scripts for mysql to find out which you need.
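As a concrete companion to the daemonization article linked above, its 'Mutual Exclusion and Running a Single Copy [open,lockf,getpid]' idea boils down to the classic pidfile pattern - a sketch, with the pidfile path as a placeholder:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 if this process now holds the lock, -1 if another copy is running. */
static int single_instance(const char *pidfile)
{
    int fd = open(pidfile, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    if (lockf(fd, F_TLOCK, 0) < 0)     /* non-blocking: fails if already locked */
        return -1;

    char buf[32];
    int n = snprintf(buf, sizeof(buf), "%ld\n", (long)getpid());
    if (ftruncate(fd, 0) == 0)
        (void)write(fd, buf, (size_t)n);  /* record our PID for admin tools */

    /* Keep fd open for the daemon's lifetime so the lock stays held. */
    return 0;
}

int main(void)
{
    if (single_instance("/var/run/mydaemon.pid") != 0) {
        fprintf(stderr, "another instance is already running\n");
        return 1;
    }
    pause();                           /* stand-in for the real daemon loop */
    return 0;
}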
For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system Windows does not keep any record of where it was copied. So unless the application that created it saved such information in the file then it will be lost.
With file auditing, file and directory operations can be tracked, but I don't think that will include the source path for file copies (just who created it and when).
Yes, it seems like you would either need to detect the file transfer based on interception of network traffic, or if you have the ability to alter the file in some way, use public key cryptography to sign files using a machine-specific key before they are transferred.
Create a service on either the destination computer or on the file-hosting computers which will add records to an alternate data stream attached to each file, much the way that Windows stores the Zone.Identifier stream for files downloaded from the internet.
You can have a background process on machine A which "tags" each file as having been tagged by machine A on such-and-such a date and time. Then when machine B downloads the file, assuming we are using NTFS filesystems, it can see the tag from A. Or, if you can't have a process at the server, you can use NTFS streams on the "client" side via packet sniffing methods as others have described. The bonus here is that future file-copies will retain the data as long as it is between NTFS systems.
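A rough sketch of that tagging idea: an NTFS alternate data stream is addressed as filename:streamname, so the background process could attach an origin record like this (the stream name origin.machine and the path are made up for the example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The stream name "origin.machine" and the path are illustrative;
       the stream travels with the file on NTFS-to-NTFS copies. */
    const char *stream = "C:\\share\\report.csv:origin.machine";

    HANDLE h = CreateFileA(stream, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char name[MAX_COMPUTERNAME_LENGTH + 1] = "";
    DWORD len = sizeof(name);
    GetComputerNameA(name, &len);

    char tag[MAX_COMPUTERNAME_LENGTH + 64];
    int n = snprintf(tag, sizeof(tag), "tagged-by=%s\r\n", name);

    DWORD written = 0;
    WriteFile(h, tag, (DWORD)n, &written, NULL);
    CloseHandle(h);
    return 0;
}

As noted above, the tag survives copies between NTFS volumes (including over SMB), but it is silently dropped if the file passes through FAT, email, or most archive formats.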
Alternative: create a requirement that all file transfers must be done through a web portal (as opposed to network drag-and-drop). Built-in logging. Or some other type of file-retrieval proxy. Do you have control over procedures such as this?
I am able to get/set security attributes (group, owner, DACL, SACL) of files on a NTFS volume by using the GetSecurityInfo/SetSecurityInfo API. The handles I pass to these APIs must be opened with specific access rights (READ_CONTROL, ACCESS_SYSTEM_SECURITY, WRITE_DAC, WRITE_OWNER) which require certain privileges (SE_SECURITY, SE_BACKUP, SE_RESTORE) to be enabled while creating them with CreateFile, which is no problem at all if the files are located on an NTFS volume, and of course if the calling process has sufficient rights. There is a problem, however, if the files are actually located on a network share - creating the file handles would fail with ACCESS_DENIED(5) or PRIVILEGE_NOT_HELD(1314). I guess this is due to the fact that the attempt to create the file handle is actually made on the remote machine in the context of a network logon session which represents my user on the remote machine, and the required privileges are not enabled for that remote process. Is there a way I can get past this limitation, i.e. be able to get/set security attributes of files on network shares?
A similar problem is getting a handle to a directory on a network share. While being able to do it locally (by using FILE_FLAG_BACKUP_SEMANTICS), I understand that this particular flag is not redirected to the remote machine, which I believe is the reason I can't open a handle to a directory on a network share. Is there a way to do this?
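For reference, a minimal sketch of the pattern described above against a local path (the path and error handling are illustrative); against a \\server\share path the very same calls only succeed once the account the redirector connects with holds the required rights and privileges on the server, which is exactly where the remote case fell over, as the resolution below notes:

#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

#pragma comment(lib, "advapi32.lib")

/* Enable one named privilege (e.g. SeSecurityPrivilege) in the process token. */
static BOOL EnablePrivilege(const char *name)
{
    HANDLE tok;
    TOKEN_PRIVILEGES tp = {0};

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
        return FALSE;
    if (!LookupPrivilegeValueA(NULL, name, &tp.Privileges[0].Luid)) {
        CloseHandle(tok);
        return FALSE;
    }
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
    CloseHandle(tok);
    return GetLastError() == ERROR_SUCCESS;  /* ERROR_NOT_ALL_ASSIGNED if we lack it */
}

int main(void)
{
    /* Placeholder path. */
    const char *path = "C:\\temp\\file.txt";

    /* SeSecurityPrivilege is required for ACCESS_SYSTEM_SECURITY (the SACL). */
    if (!EnablePrivilege("SeSecurityPrivilege"))
        printf("warning: SeSecurityPrivilege not enabled\n");

    HANDLE h = CreateFileA(path,
                           READ_CONTROL | ACCESS_SYSTEM_SECURITY,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_FLAG_BACKUP_SEMANTICS,   /* also needed for directories */
                           NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    PSID owner = NULL;
    PACL dacl = NULL, sacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;
    DWORD rc = GetSecurityInfo(h, SE_FILE_OBJECT,
                               OWNER_SECURITY_INFORMATION |
                               DACL_SECURITY_INFORMATION |
                               SACL_SECURITY_INFORMATION,
                               &owner, NULL, &dacl, &sacl, &sd);
    printf("GetSecurityInfo returned %lu\n", rc);

    if (sd) LocalFree(sd);
    CloseHandle(h);
    return 0;
}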
Well, it seems I was the one at fault here - I had been testing this case with a user who, although an administrator on my local machine, is a regular restricted user on the file server, which caused all the trouble. You can copy security attributes and open handles to directories on a network share if you connect to it with a user who has sufficient rights on the file server that is sharing the resources.