I want to programmatically collect all the processes that were run (created) while my program monitors the system.
I thought about using wpr (https://learn.microsoft.com/en-us/windows-hardware/test/wpt/) to collect the data.
Is there a way to specify a filter to collect only process creation events?
Are there other tools/SDKs for getting all process creation events?
Found the following, which provides a good starting point: https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/etw-event-tracing-for-windows-101
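If you'd rather drive ETW yourself instead of going through wpr, here is a minimal sketch (assumptions: C++, admin rights, the classic NT Kernel Logger; session handling is simplified) of starting a kernel session that records only process events via EVENT_TRACE_FLAG_PROCESS, which is roughly the "filter to process creation events" asked about. Reading the events back still needs a separate OpenTrace/ProcessTrace consumer.

```cpp
// Sketch: start the NT Kernel Logger with only process events enabled.
// Requires administrative rights; link with Advapi32.lib.
#define INITGUID            // instantiate SystemTraceControlGuid
#include <windows.h>
#include <wmistr.h>
#include <evntrace.h>
#include <cstdio>
#include <cwchar>
#include <vector>

int main() {
    // The kernel logger must use this well-known session name.
    const wchar_t* sessionName = KERNEL_LOGGER_NAMEW;
    ULONG bufSize = sizeof(EVENT_TRACE_PROPERTIES)
                  + (ULONG)((wcslen(sessionName) + 1) * sizeof(wchar_t));
    std::vector<unsigned char> buf(bufSize, 0);
    auto* props = reinterpret_cast<EVENT_TRACE_PROPERTIES*>(buf.data());

    props->Wnode.BufferSize    = bufSize;
    props->Wnode.Flags         = WNODE_FLAG_TRACED_GUID;
    props->Wnode.ClientContext = 1;                        // QPC timestamps
    props->Wnode.Guid          = SystemTraceControlGuid;   // kernel logger GUID
    props->EnableFlags         = EVENT_TRACE_FLAG_PROCESS; // process start/exit only
    props->LogFileMode         = EVENT_TRACE_REAL_TIME_MODE;
    props->LoggerNameOffset    = sizeof(EVENT_TRACE_PROPERTIES);

    TRACEHANDLE session = 0;
    ULONG status = StartTraceW(&session, sessionName, props);
    if (status != ERROR_SUCCESS) {
        wprintf(L"StartTrace failed: %lu\n", status);
        return 1;
    }
    wprintf(L"Kernel logger running; only process events are recorded.\n");
    // ... consume the events with OpenTrace/ProcessTrace, then stop the session:
    ControlTraceW(session, sessionName, props, EVENT_TRACE_CONTROL_STOP);
    return 0;
}
```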
I know it is possible to find this manually through Workflow Monitor, but we are developing an automated process.
This process must get information about several workflow runs (start and finish date-time, status).
I found that with SOAP we can get details about a workflow run, but only about the current/last one or for a specific workflow_run_id; I can't find how to get the list of previous workflow_run_ids for a workflow.
I have a process (it is a Windows service). It throws a bad_alloc exception and stops. Later it is restarted by another monitoring tool. I want to see the memory-related details specific to that process just before it stops.
Tools like Process Explorer and VMMap can be used for running processes, but as my process stops we lose the data there. Is there any way to log the data of this process until it stops, or over some time period?
I tried 2 options in VMMap for this:
(a) The 'View a running process' option works fine, but it needs a regular 'Refresh' from the user, and if the process is stopped/restarted during a refresh (it then has a new PID), the previous data are lost.
(b) 'Launch and trace a new process' (here I have the option of auto-refresh every second), but it is not able to launch my Windows service.
Could you please suggest any other ways to do this?
I referred to multiple articles on this, but none of them helped in my case.
The reason for capturing logs is that these services are in production systems on customer machines, so I cannot analyse them at the time of the issue.
I am using Performance Monitor (PerfMon) to capture data specific to my process every 10 minutes. It gives me both historical data and current data.
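If PerfMon's 10-minute interval is too coarse around the crash, another option is a small side-car logger. A minimal sketch (assuming you can run it alongside the service and feed it the service's PID; the file name and 5-second interval are made up) that polls GetProcessMemoryInfo and appends the counters to a CSV, so the last samples before the bad_alloc survive on disk:

```cpp
// Sketch: poll a process's memory counters and append them to a CSV file.
// Link with Psapi.lib.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char* argv[]) {
    if (argc < 2) { printf("usage: memlog <pid>\n"); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!proc) { printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    FILE* log = fopen("memlog.csv", "a");
    if (!log) { printf("cannot open log file\n"); return 1; }
    fprintf(log, "tick_ms,working_set_bytes,private_bytes\n");

    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    // Keep sampling until the target exits; the file keeps the last values it had.
    while (WaitForSingleObject(proc, 5000) == WAIT_TIMEOUT) {
        if (GetProcessMemoryInfo(proc, (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc))) {
            fprintf(log, "%lu,%zu,%zu\n", GetTickCount(),
                    pmc.WorkingSetSize, pmc.PrivateUsage);
            fflush(log);  // flush every sample so data survives the crash
        }
    }
    fprintf(log, "process exited\n");
    fclose(log);
    CloseHandle(proc);
    return 0;
}
```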
I have a .NET application which spawns multiple child 'worker processes'. I am using the Windows Job Object API and the JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE setting to ensure the child processes always get killed if the parent process is terminated.
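For reference, here is roughly what the setup looks like (a simplified sketch; the name worker.exe is illustrative, not my actual code):

```cpp
// Sketch: job with "kill on job close", child process assigned to it.
#include <windows.h>

int main() {
    HANDLE job = CreateJobObjectW(nullptr, nullptr);

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"worker.exe";   // illustrative child process name
    if (CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                       CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
        // Put the child in the job before it starts running.
        AssignProcessToJobObject(job, pi.hProcess);
        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }

    // The children are only killed when the *last* handle to the job is closed.
    // If another process (e.g. a WMI provider) also holds a handle to the job,
    // closing ours here is not enough.
    CloseHandle(job);
    return 0;
}
```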
However, I have observed a number of orphaned processes still running on the machine after the parent has been closed. Using Process Explorer, I can see they are correctly still assigned to the Job, and that the Job has the correct 'Kill on Job Close' setting configured.
The documentation for JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE states:
"Causes all processes associated with the job to terminate when the last handle to the job is closed."
This would seem to imply that a handle to the Job was still open somewhere... I did a search for handles to my Job object, and found instances of WmiPrvSE.exe in the results. If I kill the relevant WmiPrvSE.exe process, the outstanding handle to Job is apparently closed, and all the orphaned application processes get terminated as expected.
How come WmiPrvSE.exe has a handle to my Job?
You may find this blog helpful in sorting out what WmiPrvSE is doing.
WmiPrvSE is the WMI Provider host. That means it hosts WMI providers, which are DLLs. So it's almost surely the case that WmiPrvSE doesn't have a handle to your job, but one of the providers it hosts does. In order to figure out which provider is the culprit, one way is to follow the process here and then see which of the separate processes holds the handle.
Once you have determined which provider is holding the handle you can either try to deduce, based on what system components the provider manages, what kind of query would have a handle to your Job. Or you can just disable the provider, if you don't care about losing access to the management of the components the provider provides.
If you can determine what kind of query would be holding a handle, you may be able to deduce what program is issuing the query. Or maybe the eventlog can tell you that (first link above).
To get more help please provide additional details in the OP, such as which providers are running in WmiPrvSE, any relevant eventlog events, and any other diagnostics info you obtain.
EDIT 1/27/16
An approach to find out what happened that caused WmiPrvSE to obtain your job's handle is to use Windbg's !htrace extension. You need to run !htrace -enable after you load your .EXE but before you execute it in Windbg. Then you can break in later and execute !htrace <handle> to see stack traces from when the handle was manipulated. You may want to start with this article on handle implementation.
I need to track to a log when a service or application in Windows is started, stopped, and whether it exits successfully or with an error code.
I understand that many services do not log their own start and stop times, or if they exit correctly, so it seems the way to go would have to be inserting a hook into the API that will catch when services/applications request a process space and relinquish it.
My question is what function do I need to hook in order to accomplish this, and is it even possible? I need it to work on Windows XP and 7, both 64-bit.
I think your best bet is to use a device driver. See PsSetCreateProcessNotifyRoutine.
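A rough sketch of what that looks like (kernel-mode code built with the WDK, not something you can drop into a normal application; the DbgPrint call is just a placeholder for whatever logging you actually need):

```cpp
// Kernel-mode sketch: log every process creation and exit via
// PsSetCreateProcessNotifyRoutine.
#include <ntddk.h>

VOID ProcessNotify(HANDLE ParentId, HANDLE ProcessId, BOOLEAN Create)
{
    DbgPrint("process %p %s (parent %p)\n",
             ProcessId, Create ? "created" : "exited", ParentId);
}

VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    // Second argument TRUE removes the callback.
    PsSetCreateProcessNotifyRoutine(ProcessNotify, TRUE);
}

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    // Second argument FALSE registers the callback.
    return PsSetCreateProcessNotifyRoutine(ProcessNotify, FALSE);
}
```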
Windows Vista has NotifyServiceStatusChange(), but only for single services. On earlier versions, it's not possible other than polling for changes or watching the event log.
If you're looking for a user-space solution, EnumProcesses() will return a current list. But it won't signal you with changes, you'd have to continually poll it and act on the differences.
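A minimal sketch of that polling approach (the 1-second interval and buffer size are arbitrary, and polling this coarse can miss processes that start and exit between polls):

```cpp
// Sketch: poll the process list and print PIDs that appeared or disappeared.
// Link with Psapi.lib.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <set>
#include <vector>

static std::set<DWORD> Snapshot() {
    std::vector<DWORD> pids(4096);
    DWORD bytes = 0;
    if (!EnumProcesses(pids.data(), (DWORD)(pids.size() * sizeof(DWORD)), &bytes))
        return {};
    return std::set<DWORD>(pids.begin(), pids.begin() + bytes / sizeof(DWORD));
}

int main() {
    std::set<DWORD> previous = Snapshot();
    for (;;) {
        Sleep(1000);
        std::set<DWORD> current = Snapshot();
        for (DWORD pid : current)
            if (!previous.count(pid)) printf("started: %lu\n", pid);
        for (DWORD pid : previous)
            if (!current.count(pid)) printf("exited:  %lu\n", pid);
        previous = current;
    }
}
```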
If you're watching for a specific application or set of applications, consider assigning them to Job Objects, which are all about allowing you to place limits on processes and manage them externally. I think you could even associate Explorer with a job object, then all tasks launched by the user would be associated with your job object automatically. Something to look into, perhaps.
I have a farm of several physical servers each running a large number of Ruby "workers" (daemon-like processes) and I'd like to be able to monitor the health and progress of these processes from a central location, perhaps with historical graphing like Cacti provides. What's the simplest preferably-open-standard protocol for doing something like that? Please note I'm already using monit to keep the processes up and running and under control; what I'm asking for here is a single point of entry (i.e. dashboard) for checking in on them. Thanks.
If you are already using Monit then M/Monit sounds like a perfect match.
"M/Monit expand upon Monit's capabilities to provide monitoring and management of all Monit enabled hosts from one simple to use web-interface. " - http://mmonit.com/
G'day,
What about having a monitoring process on each server that checks the status of each process and then writes that out to a flat text file, say once every five minutes.
Then another process located on a central server can retrieve those flat files, trawl through the results, and flag any issues.
If you save the individual files and timestamp them, you would also be able to see any trends forming.
Just a quick idea.
BTW, the above system is used to monitor the servers of one of the largest websites in the world. Our scripts are written in Perl with a little bit of shell script, but I don't see why you couldn't write your monitoring scripts in Ruby as well.
HTH
cheers,
I'd suggest taking a look at Zabbix.
It's not as simple as monit, of course, but it allows you to run data collecting agent on each of your servers, with all agents feeding the central reporting and storage server with their data. Those agents can use any custom scripts to get the metrics - you can write simple scripts to extract the data you need from your workers, send it back to the central reporting server and display it there on the dashboard.