How can I set up an e-mail alert for starting and stopping a Windows service?
I have Tomcat running as a service on Windows Server 2008 R2, and I want to set up e-mail alerts whenever the service starts, stops, or restarts.
Are there any PowerShell commands for this? Any Windows event based triggers?
There are many possible solutions (like using Nagios to monitor the service status and send an alert).
If you want to use PowerShell...
SOLUTION 1: Use the service's Recovery tab to schedule an action ("Run a program"). That action should be a PowerShell script that sends your email...
NOTE: As Jacob kindly pointed out in a comment, the action is only triggered if the service stops because of an error.
SOLUTION 2: Another possible solution is having a scheduled task run a PowerShell script every x minutes (you choose the interval; 1 minute is the minimum). That script would (see the sketch after this list):
Check the service status and compare it with the previous state (use the Get-Service cmdlet).
If the state has changed, store the new state and send an email.
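A minimal sketch of such a polling script (the service name Tomcat9, the state file path, and the mail settings are assumptions you would replace with your own):
$stateFile = 'C:\Scripts\tomcat_state.txt'          # remembers the previous state between runs
$current   = (Get-Service -Name 'Tomcat9').Status   # e.g. Running / Stopped
$previous  = if (Test-Path $stateFile) { Get-Content $stateFile } else { '' }
if ("$current" -ne $previous) {
    Set-Content -Path $stateFile -Value "$current"
    Send-MailMessage -To 'you@example.com' -From 'alerts@example.com' `
        -Subject "Tomcat service is now $current" -SmtpServer 'smtp.example.com'
}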
You can find here someone who had exactly the same problem and fixed it with PowerShell.
Instead of scheduling a recurring task as @cad suggests, you can create a task that is triggered by the event log:
You could configure it to start on event 7036 from the source Service Control Manager in the System log.
This task can run a PowerShell script that sends the mail. A short example:
Get-EventLog -LogName System -Source "Service Control Manager" -After (Get-Date).AddMinutes(-5) | Where-Object { $_.Message -match "tomcat" } | Select-Object TimeGenerated, Message
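If you also want the triggered script to mail the matching entries, one possible follow-up (the addresses and SMTP server are placeholders) is:
$hits = Get-EventLog -LogName System -Source "Service Control Manager" -After (Get-Date).AddMinutes(-5) |
    Where-Object { $_.Message -match "tomcat" }
if ($hits) {
    Send-MailMessage -To 'you@example.com' -From 'alerts@example.com' `
        -Subject 'Tomcat service state changed' -Body ($hits | Out-String) -SmtpServer 'smtp.example.com'
}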
You may want to look into WMI eventing (the Win32_Service class). I haven't used this feature in quite a long time, but it looks like PowerShell is making it easier now. This would provide immediate alerting and would not require a separate polling process.
https://devblogs.microsoft.com/scripting/an-insiders-guide-to-using-wmi-events-and-powershell/
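A rough sketch of that approach with Register-WmiEvent (the service name, polling interval, and mail settings are assumptions, and the subscription only lives as long as the PowerShell session that created it):
$query = "SELECT * FROM __InstanceModificationEvent WITHIN 5 " +
         "WHERE TargetInstance ISA 'Win32_Service' AND TargetInstance.Name = 'Tomcat9'"
Register-WmiEvent -Query $query -SourceIdentifier 'TomcatStateChange' -Action {
    $svc = $Event.SourceEventArgs.NewEvent.TargetInstance
    Send-MailMessage -To 'you@example.com' -From 'alerts@example.com' `
        -Subject "Service $($svc.Name) is now $($svc.State)" -SmtpServer 'smtp.example.com'
}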
First of all, sorry for my poor English. I don't really know how to formulate the question, but I can explain my intentions, so it may help you understand me better.
I'm developing a tool that notifies you when a Windows service goes down.
The exact logic I follow is:
When a service goes down gracefully, it logs an event that you can see in the Windows Event Viewer. I've created a scheduled task that is triggered when the service is stopped according to the Windows event log (thanks to an XML filter).
This task triggers a PowerShell script that sends a request to a Telegram bot, which notifies me when the service dies.
This process works perfectly when I manually stop the service (from services.msc or PowerShell's Stop-Service). The objective is to have real-time tracking of the service, and in this case it works correctly.
The problem comes here: I cannot force the service to crash in order to see whether it logs information in the Windows Event Viewer.
My questions are:
If an error occurs, will Windows shut the service down gracefully (like when using Stop-Service), or will it kill the process without registering any log info (like when using taskkill /f)?
Any other suggestions? Is there another way to track a Windows service in real time and trigger a script without a loop that runs every so often?
Hope y'all understand me :)
If a service crashes, you should still see an error message in the event log under Windows Logs > System. The source will be "Service Control Manager" and the event ID should be 7031, 7032, or 7034.
So you can add a filter for these and have your PowerShell script run on these kinds of events as well.
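For example, a quick way to check for those crash events from PowerShell (the one-hour window is arbitrary):
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Service Control Manager'
    Id           = 7031, 7032, 7034
    StartTime    = (Get-Date).AddHours(-1)
} | Select-Object TimeCreated, Id, Message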
We have a custom Windows service that runs under a user account. Whenever we reboot the server, the service stops. To start the service again, we have to re-enter the password on the service's Log On tab. What is causing this, and how can we resolve the issue?
The behavior you describe can occur when the service user is in a domain and the domain policy periodically overwrites the local policy, dropping the "Log on as a service" right for that user.
To fix the problem, edit your domain group policy (with gpmc.msc) and ensure that the service's user has the "Log on as a service" right.
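To quickly check what the effective policy on the server currently grants, one option is exporting the local security policy with secedit (the output path is just an example and the folder must exist):
secedit /export /cfg C:\Temp\secpol.cfg
Select-String -Path C:\Temp\secpol.cfg -Pattern 'SeServiceLogonRight'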
The Microsoft Windows Service Control Manager controls the state (i.e., started, stopped, paused, etc.) of all installed Windows services. By default, the Service Control Manager will wait 30,000 milliseconds (30 seconds) for a service to respond. Certain configurations, technical restrictions, or performance issues may result in the service taking longer than 30 seconds to start and report ready to the Service Control Manager.
By editing or creating the ServicesPipeTimeout DWORD value, the Service Control Manager timeout period can be overridden, giving the service more time to start up and report ready to the Service Control Manager.
How to do it:
Go to Start > Run > and type regedit
Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
With the Control key selected, right-click in the pane on the right and select New > DWORD Value
Name the new DWORD: ServicesPipeTimeout
Right-click ServicesPipeTimeout, and then click Modify
Click Decimal, type '180000', and then click OK
Restart the computer
Note: The recommendation above increases the timeout to 180,000 milliseconds (3 minutes), but this may need to be increased further depending on your environment. Keep in mind that increasing this value will likely yield longer server boot times.
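If you prefer to script the change, a PowerShell equivalent of the steps above (using the same 180,000 ms value) would be:
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control' `
    -Name 'ServicesPipeTimeout' -PropertyType DWord -Value 180000 -Force
Restart-Computer -Confirm    # the new timeout only takes effect after a reboot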
I am working on a project developed in Laravel 4.2.
I am not able to get the idea of Laravel's queuing service. I have read many documents about it, but things are still not clear.
Should I compare a queue with a cron job?
When we put a cron job on the server, we specify a time when the cron will run. But in the case of a queue, I could not find the place where the run time is specified.
There are some files in the App/command directory and code is running on my server, but I cannot figure out when they run or how to stop these queues.
Please guide me with this problem.
A queue is a service where you add tasks to be processed later.
Generally you ask another service provider, like iron.io, to call your application so the task is processed asynchronously, and to repeat the call if it fails the first time. This allows you to respond to the application user quickly and leave the task to be processed in the background.
If you use the local sync driver, the task will be executed immediately during the same request.
I am running a piece of software on several computers at my workplace, and it can play different audio and video files stored in a shared folder on a central computer. The software runs on Windows 7, and every person in my company can add or remove files from the shared folder, but this privilege puts the data at risk. I was thinking of creating an email alert to myself whenever a file is deleted. I have written a Windows PowerShell script for sending myself emails through an SMTP server, but how can I hook it up to the event of a file or folder being deleted in a specific shared folder?
Honestly, if you want real-time monitoring (I'm guessing you do, since you want an email alert sent to you when you detect a file deletion), then the hardest part is going to be keeping the script running...
Anyway, the first two things you're going to need to do are:
1) Enable the audit policy "Audit Object Access" on the server hosting the share (a command-line sketch for this follows the list)
2) Enable auditing for the user/group you're monitoring
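For step 1, one possible way to enable the policy from an elevated prompt (assuming the English subcategory name "File System") is shown below; you still need to add an auditing (SACL) entry on the shared folder itself via Security > Advanced > Auditing:
auditpol /set /subcategory:"File System" /success:enable /failure:enable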
After that, you're going to want to use the Get-EventLog cmdlet to search for event ID 4663 (you can also use event IDs 4656 and 4658 to correlate the event; they represent the opening and closing of a given file).
Anyway, after you've enabled auditing, use something like this to get started:
Get-EventLog security | Where-Object {$_.EventID -eq 4663}
Oh, and to keep it running, you'll probably want to use a scheduled job.
Or you could use an IO.FileSystemWatcher object and register an event.
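A rough sketch of that approach (the share path, addresses, and SMTP server are placeholders, and the PowerShell session that registers the event has to stay open):
$watcher = New-Object System.IO.FileSystemWatcher '\\server\SharedFolder'
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true
Register-ObjectEvent -InputObject $watcher -EventName Deleted -SourceIdentifier 'ShareDelete' -Action {
    $path = $Event.SourceEventArgs.FullPath
    Send-MailMessage -To 'you@example.com' -From 'alerts@example.com' `
        -Subject "File deleted: $path" -SmtpServer 'smtp.example.com'
}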
Grts.
At my workplace, we have lab machines that we use to do our testing.
The standard procedure to reserve a machine for testing was to walk around the office to make sure that no one was using the machine.
This is highly inefficient and time consuming.
At first, I set up a web page where people could reserve the lab machines, but nobody kept the page updated, so it turned out to be useless.
I finally found a solution using Microsoft Log Parser and wanted to share it with the Stack Overflow community.
It is a batch file that runs on the machine, so a user can identify the last users who used the machine and easily IM them to ask whether the machine is free.
Is there a better solution to do this?
Use the built-in command qwinsta (Query Win Station) to figure out which sessions (including console) are active or inactive (disconnected), and then act on the given information (credit to krusty.ar, btw, for already linking this).
If you feel people are abusing the machine in question, refer to rwinsta to nuke their sessions into oblivion...
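For example (the machine name is a placeholder):
qwinsta /server:LABMACHINE01
rwinsta 3 /server:LABMACHINE01    # disconnect session ID 3 on that machine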
You will need to install Microsoft Log Parser.
Then create the following two files:
TSLoginsDetails.sql
SELECT
timegenerated,
EXTRACT_TOKEN(Strings,1,'|') AS Domain,
EXTRACT_TOKEN(Strings,0,'|') AS User,
EXTRACT_TOKEN(Strings,3,'|') AS SessionName,
EXTRACT_TOKEN(Strings,4,'|') AS ClientName,
EXTRACT_TOKEN(Strings,5,'|') AS ClientAddress,
EventID
FROM Security
WHERE EventID=682
ORDER BY timegenerated DESC
TSLogins.bat
echo off
cls
c:
cd "c:\Program Files\Log Parser 2.2\"
logparser.exe file:TSLoginsDetails.sql -o:DATAGRID
Now, by placing this batch file on the desktop, a user can see who the last people to log in were and contact them by IM to verify whether they are done.
How about also posting the information from the log file to the website, so it shows who is currently using the machine?
Check and notify when they log in.
Update the "who is using the machine" page you made earlier.
Run an AT job that checks every couple of hours who is on it (a schtasks sketch follows).
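On newer Windows versions AT is deprecated; a schtasks equivalent (the task name and script path are placeholders) might look like:
schtasks /create /tn "WhoIsOnLab" /sc hourly /mo 2 /tr "powershell.exe -File C:\Scripts\CheckSessions.ps1"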
Totally out of the box:
You can install the Software Testing Automation Framework (STAF) on your servers and desktops to manage your tests. It's written in Java, so you can use it on Windows and Unix/Linux desktops and servers.
Using STAF, you can create a resource pool of test servers on which you conduct tests, then write STAX jobs (STAX is a STAF execution framework) to conduct the tests. The job can grab the first available server from the resource pool, run the test, monitor the test status, log results, notify the submitter, then release the server back into the pool when done. If you have multiple people submitting jobs for tests, STAF will manage the queue of requests and satisfy them as they come in. Users can either monitor the job from their desktop, or you can set up email alerts to notify them when the test is complete.
I'm not sure if I understand you, but there is a set of command-line tools for dealing with Terminal Server sessions, and there's also a Windows API to do the same if you need to do this from a program.
Since it sounds like you're a Microsoft shop, you can set up the machines as resources in Outlook/Exchange and reserve them that way.