Global Semaphore ignores local Semaphore - Windows

I have an application that accesses some files and system resources, so only one instance of the application may be active at a time. That is accomplished by creating a named semaphore and stopping the application from running when the semaphore is already assigned.
In the past (read: when Windows XP was the most common operating system) that worked well, but we have since noticed that the old code does not work with multiple user sessions.
Here is the old code:
hInstanceSem := CreateSemaphore(nil, 0, 1, PChar(GetProductName(Application.ExeName)));
if (hInstanceSem <> 0) and (GetLastError = ERROR_ALREADY_EXISTS) then
// do not run the Application
So I did some research, learned about global semaphores, and changed the code to this:
function CreateGlobalSemaphor(SemaphorName: String): Cardinal;
var
  desc: SECURITY_DESCRIPTOR;
  att : TSecurityAttributes;
  sem : Cardinal;
begin
  att.nLength := SizeOf(TSecurityAttributes);
  att.bInheritHandle := true;
  att.lpSecurityDescriptor := @desc;
  InitializeSecurityDescriptor(att.lpSecurityDescriptor, SECURITY_DESCRIPTOR_REVISION);
  SetSecurityDescriptorDacl(att.lpSecurityDescriptor, True, nil, False);
  sem := CreateSemaphore(@att, 0, 1, PChar('Global\' + SemaphorName));
  if (sem <> 0) and (GetLastError() <> ERROR_ALREADY_EXISTS) then begin
    Result := sem;
  end else begin
    Result := 0;
    CloseHandle(sem);
  end;
end;
if CreateGlobalSemaphor(GetProductName(Application.ExeName)) = 0 then
// do not run the Application
Now, when I start the application as User1, switch to User2 and try to start the application, it will not run (as intended).
BUT when I run an older version of my program and then start the current version with the new code in the same user session, the new code ignores the semaphore created by the older code and a second instance of my application is started. (Needless to say, it crashes...)
It seems to me that the local semaphore is out of the scope of the global semaphore; otherwise a second object with the same name could not be created.
My question is: how can the global semaphore (new code) detect that a local semaphore (old code) with the same name is already assigned?
Please keep in mind that this is a problem of backward compatibility. I cannot simply recompile and redistribute the older versions of my application.

The documentation for kernel object namespaces explains that:
For processes started under a client session, the system uses the session namespace by default.
Since the old program does not explicitly include a namespace, the session namespace, Local\, is used. This means that the old program creates a semaphore named Local\xxx, while the new program uses a semaphore named Global\xxx. So you have two distinct semaphores, and the programs are completely unaware of each other.
If you want the new program to interact with the old program, you must use an object named Local\xxx.
If you wish the new program to block other new programs in different sessions, you must use an object named Global\xxx.
The obvious conclusion to draw here is that you need to create two objects. One named Local\xxx and one named Global\xxx.
Note that it is not possible to backfit cross-session exclusion onto the existing programs. They already use Local\xxx and there is no way for you to change that now.
You must also fix the error handling in your new code. You call CreateSemaphore and then go on to call GetLastError without first checking the value returned by the call to CreateSemaphore.
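To make the backward-compatible check concrete, here is a minimal sketch in plain C of the two-object approach (the same Windows API calls the Delphi code wraps). The name MyApp stands in for GetProductName(Application.ExeName), and the null-DACL security attributes mirror the ones in the question so that other users' sessions can open the Global\ object; treating a creation failure the same as "already exists" also covers an access-denied result:

/* Sketch: refuse to start if either the session-local name (used by old versions)
   or the global name (used by new versions) already exists. "MyApp" is illustrative. */
#include <windows.h>

static BOOL AnotherInstanceRunning(void)
{
    SECURITY_DESCRIPTOR sd;
    SECURITY_ATTRIBUTES sa;
    HANDLE hLocal, hGlobal;
    BOOL exists = FALSE;

    InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
    SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);   /* null DACL, as in the question */
    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = &sd;
    sa.bInheritHandle = FALSE;

    /* Local\ object: the one the old versions create implicitly. */
    hLocal = CreateSemaphoreA(&sa, 0, 1, "Local\\MyApp");
    if (hLocal == NULL || GetLastError() == ERROR_ALREADY_EXISTS)
        exists = TRUE;

    /* Global\ object: blocks new versions running in other sessions. */
    hGlobal = CreateSemaphoreA(&sa, 0, 1, "Global\\MyApp");
    if (hGlobal == NULL || GetLastError() == ERROR_ALREADY_EXISTS)
        exists = TRUE;

    /* Keep both handles open for the lifetime of the process;
       they are released automatically when the process exits. */
    return exists;
}

The Delphi version is the same sequence of calls; if the function reports another instance, do not run the application.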

Related

Associate text with a mutex

I have a program that checks that only one copy of itself is running (C++ pseudocode):
int main()
{
    HANDLE h_mutex = CreateMutex(NULL, TRUE, "MY_APP_NAME");
    if ( !h_mutex )
    {
        ErrorMessage("System object already exists");
        return EXIT_FAILURE;
    }
    else if ( GetLastError() == ERROR_ALREADY_EXISTS )
    {
        ErrorMessage("App is already running");
        return EXIT_FAILURE;
    }

    // rest of code

    ReleaseMutex(h_mutex);
    CloseHandle(h_mutex);
    return 0;
}
I would like to improve the error message "App is already running", and instead have it say "App is already running - started by USER at DATETIME, pid PID, OTHERINFO".
Is it possible for the first instance of my application to "register" a text string when creating the mutex (or just after that), so that when another instance of my application detects that the mutex already exists, it can retrieve that text string and display that information?
You could use CreateFileMapping and MapViewOfFile to share a structure between the existing process and the newly started process. You would need to create a named Event as well as the mutex that you already create, to ensure that any information you store in the mapping is initialized before you try to read it in the new process.
The basic process would be:
Create the mutex as you do now.
If the mutex did not previously exist, then you will use CreateFileMapping to create a named mapping backed by the page file (you'll pass INVALID_HANDLE_VALUE as the file handle). Use MapViewOfFile to map the section into the process address space. Initialize the contents of the shared memory with whatever information you want to share; remember that the address of the shared block will (likely) be different between processes, so don't use any pointers in the data. If you must, use offsets from the mapped address to make references (only within the shared section). Use CreateEvent to create a named manual-reset event, and use SetEvent to set the named event.
If the mutex existed previously, use CreateEvent to create the named event mentioned in the previous paragraph. Use WaitForSingleObject (or any other wait function) to wait for the named event to become signaled. This wait ensures that the original process has had a chance to initialize the contents of the shared section. Use CreateFileMapping and MapViewOfFile to map the shared section into the process address space and read whatever information you chose to store in the shared area.
Eventually, CloseHandle everything and exit.
As a side note, you do not need to take ownership of the mutex when creating it. The mutex in this case is really just a named object that lets you determine whether or not it existed before you tried to create it. You could use a semaphore, an event, or even the shared section from CreateFileMapping.
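Putting those steps together, here is a rough C sketch. The object names MyApp_Mutex, MyApp_Map and MyApp_Ready, and the INSTANCE_INFO layout, are made up for illustration, and error handling is omitted for brevity (GetUserNameW needs Advapi32):

#include <windows.h>
#include <stdio.h>
#include <wchar.h>

typedef struct {
    DWORD pid;
    WCHAR user[64];
    FILETIME started;
} INSTANCE_INFO;          /* no pointers: the block maps at different addresses */

int main(void)
{
    /* Step 1: create the mutex and note whether it already existed. */
    HANDLE hMutex = CreateMutexW(NULL, FALSE, L"MyApp_Mutex");
    BOOL firstInstance = (hMutex != NULL) && (GetLastError() != ERROR_ALREADY_EXISTS);

    /* Steps 2/3: page-file-backed shared section and a manual-reset "ready" event. */
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, sizeof(INSTANCE_INFO), L"MyApp_Map");
    INSTANCE_INFO *info = (INSTANCE_INFO *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                                         0, 0, sizeof(INSTANCE_INFO));
    HANDLE hReady = CreateEventW(NULL, TRUE, FALSE, L"MyApp_Ready");

    if (firstInstance)
    {
        /* Publish our details, then signal that the shared block is initialized. */
        DWORD len = 64;
        info->pid = GetCurrentProcessId();
        GetUserNameW(info->user, &len);
        GetSystemTimeAsFileTime(&info->started);
        SetEvent(hReady);

        /* ... run the application ... */
    }
    else
    {
        /* Wait until the first instance has filled in the shared block, then read it. */
        WaitForSingleObject(hReady, 5000);
        wprintf(L"App is already running - pid %lu, user %ls\n", info->pid, info->user);
    }

    UnmapViewOfFile(info);
    CloseHandle(hReady);
    CloseHandle(hMap);
    CloseHandle(hMutex);
    return firstInstance ? 0 : 1;
}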
You can do this in a number of ways. You can store the text in a file and, when your application starts, read it after checking the mutex name. Or you can store it in the registry. Or you can send a message to your application's window. There are many ways to do such a thing; you should decide which one best fits your application.

LockFileEx returns success, but seems to have no effect

I'm trying to lock a file, because it is sitting on a network drive, and multiple instances of a program from multiple computers need to edit it. To prevent damage, I intend to set it up so that only one of the instances has rights to it at a time.
I implemented a lock which should, in theory, block any access to the first 100 bytes of the file. I'm using Qt with its own file handling, but it provides a way to obtain the underlying native file handle.
QFile file(path);
if (!file.open(QIODevice::ReadWrite))   // the file must be open before handle() is valid
{
    // error
    return;
}

HANDLE handle = (HANDLE)_get_osfhandle(file.handle());
if (handle == INVALID_HANDLE_VALUE)
{
    // error
    return;
}

OVERLAPPED ov1;
memset(&ov1, 0, sizeof(ov1));
ov1.Offset = 0;
ov1.OffsetHigh = 0;

LockFileEx(handle, LOCKFILE_FAIL_IMMEDIATELY | LOCKFILE_EXCLUSIVE_LOCK, 0, 100, 0, &ov1);
qDebug() << file.readLine();
LockFileEx() returns 1, so it seems to have succeeded. However, if I run the program in multiple instances, all of them can read and print the first line of the file. More than that, I can freely edit the file with any text editor.
The file being on a network drive is not the issue; it behaves the same way with a local file.
The problem was that, while the program does not terminate, the QFile variable was local, so after the function finished, the destructor of the QFile was called and the file was released. The OS then appears to have released the lock as well.
If my QFile outlives the scope, everything works just fine. A minor issue is that, while I expected the file to be locked against reading, external programs still have read-only access to it. That is not a problem, as my program can check whether it can create a lock and detect failure to do so. This means the intended mutex functionality works.
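For reference, a plain C sketch of the working pattern: open the file, take the byte-range lock, and keep the handle open for as long as the lock has to hold. The path is made up and error handling is minimal:

/* Sketch: take an exclusive lock on the first 100 bytes and hold it while working. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileW(L"\\\\server\\share\\data.bin",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = {0};              /* Offset/OffsetHigh = 0: the lock starts at byte 0 */
    if (!LockFileEx(h, LOCKFILE_FAIL_IMMEDIATELY | LOCKFILE_EXCLUSIVE_LOCK,
                    0, 100, 0, &ov))
    {
        printf("another instance holds the lock\n");
        CloseHandle(h);
        return 1;
    }

    /* ... edit the file; the lock is held as long as h stays open ... */

    UnlockFileEx(h, 0, 100, 0, &ov);
    CloseHandle(h);
    return 0;
}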

Communication between two applications on Windows with Delphi

I have two applications and I would like them to communicate by sending text when an exception is raised.
The problem is as follows:
In one application I use the function
Application.Handle
to grab the handle of the application.
And in my client I use:
ServerApplicationHandle := FindWindow('TForm1', 'Form1');
to know which application I should send the message to, but the two calls return different numbers. Could you tell me why?
As already explained, the (Main)Form and Application are two different things.
Since Delphi 2007 there is another behavior to note.
Depending on Application.MainFormOnTaskbar you are able (or not) to get the handle via FindWindow.
A little snippet to show the different behavior:
var
  FW_ah, FW_mfh, ah, mfh: THandle;

procedure Display(OnTask: Boolean);
begin
  Application.MainFormOnTaskbar := OnTask;
  ah := Application.Handle;
  mfh := MainForm.Handle;
  FW_ah := FindWindow(PChar(Application.ClassName), PChar(Application.Title));
  FW_mfh := FindWindow(PChar(ClassName), PChar(Caption));
  ShowMessage(Format('ah: %d FW_ah: %d - mfh: %d FW_mfh: %d', [ah, FW_ah, mfh, FW_mfh]));
end;

begin
  Display(true);
  Display(false);
end;
Application.Handle is the window handle for the hidden window associated with the global Application object.
FindWindow('TForm1', 'Form1') will return the window handle of a top-level form in your application.
These are indeed not the same thing. You could, I suppose, use Form1.Handle instead of Application.Handle. However, you would need to be wary of window re-creation.
Frankly, this doesn't sound like the best way to do inter-process communication. Perhaps you might consider sockets or named pipes instead.
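If you do stay with window handles, WM_COPYDATA is the usual message for passing a block of text between processes. Below is a minimal C sketch of the sending side, reusing the class name 'TForm1' and caption 'Form1' from the question; everything else is illustrative, and the receiving form needs its own WM_COPYDATA handler:

/* Sender side: find the receiver's top-level form and pass it a text string. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HWND receiver = FindWindowA("TForm1", "Form1");    /* class/caption from the question */
    if (receiver == NULL)
    {
        printf("receiver window not found\n");
        return 1;
    }

    const char *text = "exception raised in server";   /* illustrative payload */
    COPYDATASTRUCT cds;
    cds.dwData = 1;                                    /* arbitrary message tag */
    cds.cbData = (DWORD)strlen(text) + 1;
    cds.lpData = (void *)text;

    /* WM_COPYDATA must be sent, not posted; in Delphi the receiver implements
       a message handler for WM_COPYDATA on the form. */
    SendMessageA(receiver, WM_COPYDATA, 0, (LPARAM)&cds);
    return 0;
}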

file_operations question: how do I know if a process that opened a file for writing has decided to close it?

I'm currently writing a simple "multicaster" module.
Only one process can open a proc filesystem file for writing; the rest can open it for reading.
To do so I use the inode_operations .permission callback: I check the operation, and when I detect that someone has opened the file for writing I set a flag ON.
I need a way to detect that a process which opened the file for writing has decided to close it, so I can set the flag OFF and someone else can open it for writing.
Currently, when someone opens the file for writing, I save the current->pid of that process, and when the .close callback is called I check whether that process is the one I saved earlier.
Is there a better way to do this? Without saving the pid, perhaps by checking the files that the current process has opened and their permissions...
Thanks!
No, it's not safe. Consider a few scenarios:
Process A opens the file for writing, and then fork()s, creating process B. Now both A and B have the file open for writing. When Process A closes it, you set the flag to 0 but process B still has it open for writing.
Process A has multiple threads. Thread X opens the file for writing, but Thread Y closes it. Now the flag is stuck at 1. (Remember that ->pid in kernel space is actually the userspace thread ID).
Rather than doing things at the inode level, you should be doing things in the .open and .release methods of your file_operations struct.
Your inode's private data should contain a struct file *current_writer;, initialised to NULL. In the file_operations.open method, if it's being opened for write then check the current_writer; if it's NULL, set it to the struct file * being opened, otherwise fail the open with EPERM. In the file_operations.release method, check if the struct file * being released is equal to the inode's current_writer - if so, set current_writer back to NULL.
PS: Bandan is also correct that you need locking, but using the inode's existing i_mutex should suffice to protect the current_writer.
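A hedged sketch of that approach as a self-contained module, assuming a kernel old enough that proc_create() still accepts a struct file_operations (newer kernels use struct proc_ops). For a single proc entry the writer bookkeeping can live in a module-level variable rather than the inode's private data, and a plain mutex stands in for i_mutex; all names here are illustrative:

/* Single-writer guard in .open/.release, tracking the struct file, not the pid. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/fs.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(writer_lock);
static struct file *current_writer;    /* NULL while nobody has the file open for write */

static int mcast_open(struct inode *inode, struct file *filp)
{
    if (filp->f_mode & FMODE_WRITE) {
        mutex_lock(&writer_lock);
        if (current_writer) {
            mutex_unlock(&writer_lock);
            return -EPERM;              /* somebody else is already writing */
        }
        current_writer = filp;          /* remember which open file owns write access */
        mutex_unlock(&writer_lock);
    }
    return 0;
}

static int mcast_release(struct inode *inode, struct file *filp)
{
    mutex_lock(&writer_lock);
    if (current_writer == filp)         /* only the writer's close clears the flag */
        current_writer = NULL;
    mutex_unlock(&writer_lock);
    return 0;
}

static const struct file_operations mcast_fops = {
    .owner   = THIS_MODULE,
    .open    = mcast_open,
    .release = mcast_release,
};

static int __init mcast_init(void)
{
    if (!proc_create("multicaster", 0666, NULL, &mcast_fops))
        return -ENOMEM;
    return 0;
}

static void __exit mcast_exit(void)
{
    remove_proc_entry("multicaster", NULL);
}

module_init(mcast_init);
module_exit(mcast_exit);
MODULE_LICENSE("GPL");

Because .release is called once per struct file, the fork() and multi-thread scenarios above are handled: whichever thread closes the writer's file clears the flag, and a forked duplicate of the descriptor keeps the same struct file until its last close.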
I hope I understood your question correctly: when someone wants to write to your proc file, you set a variable called flag to 1 and also save current->pid in a global variable. Then, when any close() entry point is called, you check the current->pid of the close() instance and compare it with your saved value. If it matches, you turn the flag off. Right?
Consider this situation: process A wants to write to your proc resource, so you run the permission callback. You see that flag is 0, so you can set it to 1 for process A. But at that moment the scheduler finds that process A has used up its time slice and chooses a different process to run (flag is still 0!). After some time, process B comes along also wanting to write to your proc resource, sees that flag is 0, sets it to 1, and goes about writing to the file. Unfortunately, at this moment process A gets scheduled to run again, and since it thinks that flag is 0 (remember, before the scheduler pre-empted it, flag was 0), it sets it to 1 and goes about writing to the file. End result: the data in your proc resource gets corrupted.
You should use a proper locking mechanism provided by the kernel for this type of operation; based on your requirements, I think RCU is the best fit. Have a look at the RCU locking mechanism.

GetExitCodeProcess() returns 128

I have a DLL that's loaded into a 3rd party parent process as an extension. From this DLL I start external processes (my own) using the CreateProcess API. This works great in 99.999% of the cases, but sometimes it suddenly fails and stops working permanently (maybe a restart of the parent process would solve this, but that is undesirable and I don't want to recommend it until I solve the problem). The failure shows up as the external process no longer being invoked, even though CreateProcess() doesn't report an error, and as GetExitCodeProcess() returning 128. Here's the simplified version of what I'm doing:
STARTUPINFO si;
ZeroMemory(&si, sizeof(si));
si.cb = sizeof(si);
si.dwFlags = STARTF_USESHOWWINDOW;
si.wShowWindow = SW_HIDE;

PROCESS_INFORMATION pi;
ZeroMemory(&pi, sizeof(pi));

if(!CreateProcess(
    NULL,                 // No module name (use command line).
    "<my command line>",
    NULL,                 // Process handle not inheritable.
    NULL,                 // Thread handle not inheritable.
    FALSE,                // Set handle inheritance to FALSE.
    CREATE_SUSPENDED,     // Create suspended.
    NULL,                 // Use parent's environment block.
    NULL,                 // Use parent's starting directory.
    &si,                  // Pointer to STARTUPINFO structure.
    &pi))                 // Pointer to PROCESS_INFORMATION structure.
{
    // Handle error.
}
else
{
    // Do something.

    // Resume the external process thread.
    DWORD resumeThreadResult = ResumeThread(pi.hThread);
    // ResumeThread() returns 1, which is OK
    // (it means that the thread was suspended but then restarted).

    // Wait for the external process to finish.
    DWORD waitForSingleObjectResult = WaitForSingleObject(pi.hProcess, INFINITE);
    // WaitForSingleObject() returns 0, which is OK.

    // Get the exit code of the external process.
    DWORD exitCode;
    if(!GetExitCodeProcess(pi.hProcess, &exitCode))
    {
        // Handle error.
    }
    else
    {
        // There is no error, but exitCode is 128, a value that
        // doesn't exist in the external process (and even if it
        // existed it wouldn't matter, as the process isn't being invoked any more).
        // Error code 128 is ERROR_WAIT_NO_CHILDREN, which would make some
        // sense *if* GetExitCodeProcess() returned FALSE and I then got
        // ERROR_WAIT_NO_CHILDREN from GetLastError().
    }

    // PROCESS_INFORMATION handles for process and thread are closed.
}
The external process can be invoked manually from Windows Explorer or the command line and it starts just fine on its own. Invoked like that, before doing any real work, it creates a log file and logs some information about itself. But invoked as described above, this logging information doesn't appear at all, so I'm assuming that the main thread of the external process never enters main() (I'm testing that assumption now).
There is at least one thing I could do to try to circumvent the problem (not start the thread suspended), but I would like to understand the root of the failure first. Does anyone have any idea what could cause this and how to fix it?
Quoting from the MSDN article on GetExitCodeProcess:
The following termination statuses can be returned if the process has terminated:
- The exit value specified in the ExitProcess or TerminateProcess function
- The return value from the main or WinMain function of the process
- The exception value for an unhandled exception that caused the process to terminate
Given the scenario you described, I think the most likely cause is the third: an unhandled exception. Have a look at the source of the processes you create.
Have a look at Desktop Heap memory.
Essentially the desktop heap issue comes down to exhausted resources (e.g. starting too many processes). When your app runs out of these resources, one of the symptoms is that you won't be able to start a new process, and the call to CreateProcess will fail with code 128.
Note that the context you run in also has some effect. For example, running as a service, you will run out of desktop heap much faster than if you're testing your code in a console app.
This post has a lot of good information about desktop heap
Microsoft Support also has some useful information.
There are two issues that I can think of from your code sample:
1. Get your usage of the first two parameters to the CreateProcess call working first. Hard-code the paths, invoke notepad.exe, and see if that comes up. Keep tweaking this until you have Notepad running (see the sketch below).
2. Contrary to your comment, if you pass NULL as the current-directory parameter for the new process, it will use the current working directory of the calling process to start the new process from, not the parent's starting directory.
I assume that your external process exe cannot start properly due to DLL dependencies that cannot be resolved in the new path.
PS: In the debugger, watch @err,hr, which will show you the explanation for the last error code.
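A hedged sketch of the sanity check suggested in point 1: hard-code notepad.exe, keep the rest of the call the same as in the question, and print the exit code. Note that the command-line buffer is writable, which the Unicode CreateProcessW requires:

/* Minimal CreateProcess sanity check, for isolating path/command-line problems. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {0};
    wchar_t cmdline[] = L"notepad.exe";      /* writable buffer, not a string literal pointer */

    if (!CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi))
    {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    ResumeThread(pi.hThread);
    WaitForSingleObject(pi.hProcess, INFINITE);

    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);
    printf("exit code: %lu\n", exitCode);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}

If this works from inside the DLL but your real command line does not, the difference is in the path, the working directory, or the dependencies of the target executable.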
