I have a thread that runs in a DLL that I dynamically link with in my main app. Is there a way to wait for all threads in an .exe (including its loaded DLLs) without knowing the thread handle? Windows 7 x64, VC++.
The thread is a function that does some processing on a certain file. It is not expected to return anything; it works on a global object that is modified at certain stages as the thread progresses. The thread function calls other functions.
I want to wait until the last line of the function is executed.
I never did this myself, but you could probably
create a snapshot using CreateToolhelp32Snapshot
then enumerate the threads using Thread32First and Thread32Next
for each thread ID, use OpenThread to acquire a handle. Make sure that you open the thread with the SYNCHRONIZE access right so that you can, at last,
pass all thread handles to WaitForMultipleObjects to wait for all of them to terminate. A sketch of this approach follows below.
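A minimal sketch of that sequence, assuming it is called from the thread that should do the waiting (WaitForAllOtherThreads is a made-up name, and threads created after the snapshot is taken are not covered):

#include <windows.h>
#include <tlhelp32.h>
#include <vector>

// Waits for every thread in the current process except the calling one.
void WaitForAllOtherThreads()
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return;

    std::vector<HANDLE> handles;
    THREADENTRY32 te = { sizeof(te) };
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te))
    {
        if (te.th32OwnerProcessID != GetCurrentProcessId() ||
            te.th32ThreadID == GetCurrentThreadId())
            continue;   // skip other processes and the calling thread

        // SYNCHRONIZE is enough to be able to wait on the handle.
        if (HANDLE h = OpenThread(SYNCHRONIZE, FALSE, te.th32ThreadID))
            handles.push_back(h);
    }
    CloseHandle(snap);

    // WaitForMultipleObjects accepts at most MAXIMUM_WAIT_OBJECTS (64)
    // handles per call, so wait in chunks.
    const size_t chunk = MAXIMUM_WAIT_OBJECTS;
    for (size_t i = 0; i < handles.size(); i += chunk)
    {
        size_t left = handles.size() - i;
        DWORD count = (DWORD)(left < chunk ? left : chunk);
        WaitForMultipleObjects(count, &handles[i], TRUE, INFINITE);
    }

    for (size_t i = 0; i < handles.size(); ++i)
        CloseHandle(handles[i]);
}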
Another solution, assuming that all you want is to stop the main() function from running but not exit the process until any other threads are complete, is to call
ExitThread(0);
from within main(). If you don't call ExitProcess, either explicitly or by returning from main(), the process will not exit until the last thread exits.
Note that there is a major problem with doing this, no matter how you approach it: if one of the Windows APIs you use has launched a thread that isn't going to exit, your application won't exit either.
The proper solution is for the DLL itself to contain a shutdown function that waits for its own threads to exit if necessary.
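For illustration, a minimal sketch of such a shutdown export, assuming the DLL keeps the worker thread's handle in a global (g_hWorker and WaitForDllWorker are made-up names):

#include <windows.h>

// Inside the DLL: remember the worker thread's handle when it is created,
// and export a routine the host app calls before unloading or exiting.
static HANDLE g_hWorker = NULL;   // set where the DLL starts its thread

extern "C" __declspec(dllexport) void WaitForDllWorker()
{
    if (g_hWorker != NULL)
    {
        WaitForSingleObject(g_hWorker, INFINITE);
        CloseHandle(g_hWorker);
        g_hWorker = NULL;
    }
}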
Related
I'm working with CreateProcess to run a process/application of mine.
The purpose is to run it, do something, wait for some indication, and close it (using TerminateProcess).
What I noticed is that this application/process creates sub-processes.
Additionally, when terminating the created process, the sub-processes do not terminate, and still remain for a period of time.
I wanted to ask if there's an option to somehow kill all the sub-processes with the main process.
This causes issues: when I call CreateProcess again, there are leftovers from the previous processes, and I think they're interfering with the new run.
I really appreciate your help!
Use Windows Job Objects. Jobs are like process groups; the operating system will take care of terminating all processes in the job once the job leader (your initial process) is terminated. This even works if the process leader crashes.
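A minimal sketch of that setup, assuming a placeholder command line of child.exe and with error handling trimmed:

#include <windows.h>

// Create a job that kills every process in it when the job handle is
// closed (which also happens when this process exits), then start the
// child suspended, put it in the job, and let it run.
bool LaunchChildInJob()
{
    HANDLE job = CreateJobObjectW(NULL, NULL);
    if (job == NULL)
        return false;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"child.exe";   // placeholder
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi))
    {
        CloseHandle(job);
        return false;
    }

    // Assign before resuming so any grandchildren are created inside the job.
    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    // Keep 'job' open for as long as the child should live; closing it
    // terminates everything in the job.
    return true;
}

Note that on Windows 7 a process can belong to only one job, so AssignProcessToJobObject can fail if your own process was already placed in a job by something else; nested jobs only arrived in Windows 8.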
When you create a process using CreateProcess, you pass in a pointer to a PROCESS_INFORMATION structure (an LPPROCESS_INFORMATION), which gets filled in with the process and thread handles.
You'll need to terminate the child processes yourself (and close the handles), as there is no process hierarchy as in Linux/Unix.
See here for CreateProcess and here for the PROCESS_INFORMATION-structure.
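For example, a rough sketch using those handles, with child.exe as a placeholder command line:

#include <windows.h>

// Keep the handles CreateProcess gives you, terminate the child yourself
// later, and close the handles when you are done.
void RunAndKillChild()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"child.exe";   // placeholder

    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return;

    // ... later, when the child should go away:
    TerminateProcess(pi.hProcess, 1);
    WaitForSingleObject(pi.hProcess, INFINITE);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    // This does not touch any sub-processes the child itself created;
    // for that, use a job object as described in the other answer.
}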
I've got a program that needs to be able to update itself. I have a second program that will perform the updates, downloading and installing. The updater will obviously need to be able to update the main program, and for that, the main program can't be running. So I want to have the main program launch the updater with a call to ShellExecuteEx, but the MSDN documentation has me a little confused.
It says that:
The SEE_MASK_NOASYNC flag must be specified if the ... process will
terminate soon after ShellExecuteEx returns. Under such conditions,
the calling thread will not be available to complete the DDE
conversation, so it is important that ShellExecuteEx complete the
conversation before returning control to the calling application.
Failure to complete the conversation can result in an unsuccessful
launch of the document.
And under SEE_MASK_NOASYNC, it says that the ShellExecuteEx call won't return until the operation is complete. What I want is to launch the updater and then immediately terminate the main program, so the updater can run without trouble. Is that the correct way to do it? And is there anything special I need to do to keep the launched updater from being marked as a "child process" that will be killed when the main process shuts down?
Do you have to call ShellExecute? I do something similar and launch via CreateProcess and it works fine.
(In reality, cmd.exe is launched which runs a batch file. The batch file waits, starts the updater and waits for it to finish, then waits a bit, then launches the main app again. Never had any trouble with it)
DDE won't be used to launch an EXE directly. (It's only used to launch certain types of files if they are registered as needing to be launched that way. If you're just running an EXE by name, DDE should be irrelevant.)
So you should specify SEE_MASK_NOASYNC (to make sure the ShellExecuteEx call finishes doing all it needs to do and your app is then free to end the thread as soon as the call returns) and the API should return very quickly.
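A rough sketch of that call, with updater.exe as a placeholder path:

#include <windows.h>
#include <shellapi.h>

// Launch the updater, then let the main program shut down.
bool LaunchUpdater()
{
    SHELLEXECUTEINFOW sei = { sizeof(sei) };
    sei.fMask  = SEE_MASK_NOASYNC;   // finish any async/DDE work before returning
    sei.lpVerb = L"open";
    sei.lpFile = L"updater.exe";     // placeholder
    sei.nShow  = SW_SHOWNORMAL;

    if (!ShellExecuteExW(&sei))
        return false;

    // Safe to exit the main program now, e.g. by closing the main window.
    return true;
}

As for the child-process worry: Windows does not kill a launched process just because the launcher exits (that only happens if you tie them together with a job object), so nothing special is needed to keep the updater alive.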
Here's a good CodeProject article about launching an updater:
http://www.codeproject.com/Articles/395572/Executable-Integration-Example-External-settings-a
I'm analyzing a particular malware sample and it seems to be doing the following in order to write into the process memory of explorer.exe and execute code:
OpenProcess(on explorer.exe)
NtAllocateVirtualMemory
NtWriteVirtualMemory
CloseHandle
CreateRemoteThread
WaitForSingleObject
GetExitCodeThread
Now what I wish to do is attach a debugger to the newly created thread in explorer.exe and debug it from its entry point onward. Would that be possible?
How could I go about doing the same?
You can run another instance of the debugger and attach it to explorer.exe, then look at the address of the thread function in the CreateRemoteThread parameters, and set a breakpoint there.
I am working on an MFC application that can (among other things) be used to shut Windows down. When doing this, Windows of course sends the WM_QUERYENDSESSION and WM_ENDSESSION to all applications, mine included. However, the problem is that my application, as part of some destructors, delete certain files (with CFile::Remove) that have been used during the execution. I have reason to believe that the destructors are called (but that is hard to know for certain) when the application is closed by Windows.
However, when Windows starts back up again, I do occasionally notice that the files that were supposed to be deleted are still present. This does not happen consistently, even when the execution of the program is identical (I have a script for testing this). This leads me to think that one of two things is happening: either a) the destructors are not consistently being called, or b) the Remove function returns, but the file is not actually deleted before Windows is shut down.
The only work-around I have found so far is that if I get the system to wait with the shutdown for approximately 10 seconds after my program has stopped, then the files will be properly deleted. This leads me to believe that b) may be the case.
I hope someone is able to help me with this problem.
Once your program returns from WM_ENDSESSION, Windows can terminate it at any time:
If the session is being ended, this parameter is TRUE; the session can end any time after all applications have returned from processing this message.
If the session ends quickly, then it may end before your destructors run. You must do all your cleanup before returning from WM_ENDSESSION, because there is no guarantee that you will get a chance to do it afterwards.
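For example, in an MFC app a sketch of doing the cleanup in the WM_ENDSESSION handler could look like this (CMainFrame and CleanUpTempFiles are placeholder names, and ON_WM_ENDSESSION() has to be in the message map):

// Message map entry (sketch): ON_WM_ENDSESSION()
void CMainFrame::OnEndSession(BOOL bEnding)
{
    if (bEnding)
    {
        // Delete the temporary files now, while the process is still
        // guaranteed to be alive; don't rely on destructors afterwards.
        CleanUpTempFiles();   // placeholder; e.g. calls CFile::Remove(...)
    }
    CFrameWnd::OnEndSession(bEnding);
}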
The problem here is that some versions of Windows report that file-handling operations have completed before they actually have. This isn't normally a problem, but if shutdown kicks in before they finish, some operations, including file deletion, will be abandoned.
I would suggest that you cope with this by forcing your code to wait for a confirmed deletion of the files (have a process look for the files and raise an event when they've gone) before calling for system shutdown.
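A minimal sketch of such a wait, assuming simple polling is acceptable (WaitUntilDeleted, the path and the timeout are all placeholders):

#include <windows.h>

// Returns true once the file is gone, false if it is still there after
// the timeout. Poll-based; an event raised by a watcher works too.
bool WaitUntilDeleted(const wchar_t* path, DWORD timeoutMs)
{
    ULONGLONG start = GetTickCount64();
    while (GetFileAttributesW(path) != INVALID_FILE_ATTRIBUTES)
    {
        if (GetTickCount64() - start > timeoutMs)
            return false;
        Sleep(100);
    }
    return true;
}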
If the system is properly shut down (not by sudden power loss or the like), then all the cached data is flushed. In particular, this includes flushing the global file table (or whatever it's called in your file system), which should commit the file deletion.
So the problem seems to be that the user-mode code doesn't call DeleteFile, or it fails (for whatever reason).
Note that there are several ways the application (process) may exit, and destructors are not always called. There are automatic objects, which are destroyed in the context of their call stack, plus global/static objects, which are initialized and destroyed by the CRT init/cleanup code.
Below is a short summary of ways to terminate the process, with the consequences:
All process threads exit conventionally (return from their thread procedure). The OS terminates a process that has no threads left. All the destructors are executed.
Some threads either exit via ExitThread or are killed by TerminateThread. The automatic objects of those threads are not destructed.
The process exits via ExitProcess. Automatic objects are not destructed; global objects may be destructed (this happens if the CRT is used in a DLL).
The process is terminated by TerminateProcess. No destructors are called at all.
I suggest you check whether DeleteFile (or CFile::Remove, which wraps it) is actually called, and also check whether it succeeds. For instance, you may have opened the same file twice for whatever reason, and a handle that is still open will make the deletion fail.
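For instance, a sketch of checking the removal explicitly (the path is a placeholder; CFile::Remove throws CFileException on failure):

#include <afx.h>   // MFC: CFile, CFileException, TRACE

// Delete the file and log why it failed if it did not work.
void RemoveWorkFile()
{
    try
    {
        CFile::Remove(_T("C:\\temp\\work.tmp"));   // placeholder path
    }
    catch (CFileException* e)
    {
        // e.g. sharing violation because a handle to the file is still open
        TRACE(_T("Remove failed: cause=%d, OS error=%ld\n"),
              e->m_cause, e->m_lOsError);
        e->Delete();
    }
}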
My application creates a suspended process and gets process information via VirtualQueryEx(), but fails to get the process's module information using EnumProcessModules().
The task above completes ONLY if the process is NOT created suspended and a breakpoint is hit in the debugger (so the program runs before the call is executed).
I'm trying to write a very decent disassembler and for that I would need to run a target process suspended, but EnumProcessModules() does not work on suspended processes.
Is there an alternative?
I dealt with something like this several years ago. If I remember right, what I ended up doing was creating the process suspended, then calling GetThreadContext, setting its trap flag, calling SetThreadContext, and resuming the thread (which runs one instruction), then using EnumProcessModules.
Of course, there may be other ways to handle this, but at least if memory serves, that's what I came up with at the time and I seem to recall it working.
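If it helps, here is a sketch of what that recollection describes, assuming pi is the PROCESS_INFORMATION from CreateProcess with CREATE_SUSPENDED; whether one single-stepped instruction is really enough for EnumProcessModules to succeed is the part you would need to verify:

#include <windows.h>

// Set the trap flag on the suspended primary thread so it executes a
// single instruction once resumed, then try EnumProcessModules again.
bool SingleStepOnce(const PROCESS_INFORMATION& pi)
{
    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_CONTROL;
    if (!GetThreadContext(pi.hThread, &ctx))
        return false;

    ctx.EFlags |= 0x100;   // TF, the x86/x64 trap flag
    if (!SetThreadContext(pi.hThread, &ctx))
        return false;

    return ResumeThread(pi.hThread) != (DWORD)-1;
}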