Batch script to block IP in Windows Firewall

I am creating a batch script to deal with the brute forcing through RDC on my server.
The way I plan on doing this is by making Windows run my script when a failed RDC login is recorded in the Windows Security log. I will add a whitelist to prevent trusted IP addresses from being blocked. The script will also log how many times each IP has tried to log in, and if it fails more than 5 times in a row, it will be blocked.
Problem is, I don't know how to pass the IP from the logged event to the batch script.
The Windows Server 2008 R2 Event Viewer has the ability to start a program when a specific event (with a given event ID) occurs. It has a box for arguments that can be passed to the program I specify (which will be this batch script). However, it does not document what arguments it can pass on (I want the IP to be passed to the batch script... that is all).
Any help would be greatly appreciated.

I managed to fix it myself, using this. I adapted the example he had to include $(IpAddress) and got it working after a bit of mucking around.
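For reference, here is a minimal sketch of the script side, assuming the event-triggered task passes the event's $(IpAddress) value as the first argument (presumably via a ValueQueries entry in the task XML). It is shown in PowerShell rather than batch for brevity, and the whitelist, log path, threshold, and rule name are made-up placeholders; the final netsh call is the same from a batch file.

param([string]$Ip)

$whitelist = @('192.168.1.10', '203.0.113.5')   # trusted addresses, never blocked
$logFile   = 'C:\Scripts\failed-logons.txt'     # one line appended per failed attempt
$threshold = 5

if ([string]::IsNullOrEmpty($Ip) -or $whitelist -contains $Ip) { return }

# record this failure, then count how many times the address has appeared so far
Add-Content -Path $logFile -Value $Ip
$failures = @(Get-Content $logFile | Where-Object { $_ -eq $Ip }).Count

if ($failures -ge $threshold) {
    # add an inbound block rule for the offending address
    netsh advfirewall firewall add rule name=Block_$Ip dir=in action=block remoteip=$Ip
}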

Related

remote wevtutil "The account is not authorized to log in from this station."

I am responsible for running centralized backups of Windows Security logs on a network of ~15 Windows boxes. To automate this task, I have been writing a PowerShell script that uses wevtutil's /r parameter to do it all remotely. All of the boxes are connected to a SharePoint network drive that I was hoping to copy the logs to so that I could centralize all of the logs, but I've run into some trouble.
The script runs fine when I pass it the IP of the Windows box that it's running on. The logs are copied to the SharePoint without a hitch. The script also runs fine when I just tell it to copy the logs locally. However, when the script tries to remotely copy a computer's log to the SharePoint, I get a "Failed to Archive Security log. The account is not authorized to log in from this station." error.
The format of the command is
wevtutil epl Security \\path\to\sharepoint\[hostname]-[datetime]Security.evtx /r:[hostname]
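In the script this sits inside a loop along these lines (the host list and share path here are placeholders, not the real ones):

$hosts = Get-Content 'C:\Scripts\hosts.txt'
$share = '\\sharepoint\backups\seclogs'

foreach ($h in $hosts) {
    $stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
    $dest  = "$share\$h-$stamp-Security.evtx"
    # /r: tells wevtutil to read and archive the Security log from the remote box
    wevtutil epl Security $dest /r:$h
}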
I am running the script as a domain admin. I have also run the script with the credentials of a local admin, and I got a generic access-denied error.
A Google search for the error message mostly turns up computers not being able to access network drives (not remote wevtutil specifically) and involves a lot of messing around in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
or
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
neither of which has yielded any results. I am also restricted to PowerShell 1, so I cannot use PowerShell remoting, AFAIK.
I could just put the script on each machine and run it locally with Task Scheduler, but I was hoping for a more elegant solution. Does anyone have experience with using wevtutil in this way and can point me in the right direction, or perhaps even suggest a better technique/tool?
How much latitude do you have to implement another solution?
If you wanted to do something with the logs, or make them easily searchable, you could set up a free Splunk server and either use a Splunk forwarder to ship the logs off-box, or use PowerShell to send the logs to Splunk's HTTP Event Collector.
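If you go the HTTP Event Collector route while stuck on an old PowerShell, something along these lines should work without Invoke-RestMethod (the endpoint URL and token below are placeholders):

$wc = New-Object System.Net.WebClient
$wc.Headers.Add('Authorization', 'Splunk 00000000-0000-0000-0000-000000000000')  # placeholder HEC token
$body = '{"event": {"host": "server01", "message": "Security log archived"}}'
# POST the JSON payload to the collector endpoint
$wc.UploadString('https://splunk.example.com:8088/services/collector/event', $body)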

send an argument/command to an already running PowerShell script

Until we can implement our new HEAT SM system, I need to create some workflows to ease our currently manual user administration processes.
I intend to use PowerShell to execute the actual tasks but need to use VBS to send an argument to PS from an app.
My main question on this project is: can an argument be sent to an already running PowerShell process?
Example:
We have a PS menu app that we will launch in the AM and leave running all day.
I would love for there to be a way to allow PS to listen for commands/args and take action on them as they come in.
The reason I want to do it this way is that one of the tasks needs to disable Exchange features, and the script will need to establish a connection to a remote PSSession, which in our environment can take between 10 and 45 seconds. If I were to invoke the command directly from HEAT (call-logging software), it would lock up, preventing the tech from moving on to another case until the script terminates.
I have searched all over for similar functionality, but I fear that this is not possible with PS.
Any suggestions?
I had already set up a script following this recommendation, but I was curious to see whether there was a more seamless approach.
As suggested in one of the comments by @Tony Hinkle:
I would have the PS script watch for a file, and then have the VBScript create a file with the arguments. You would either need to start it on another thread (since the menu is waiting for user input), or just use a separate script that in turn starts another instance of the existing PS script with a param used to specify the needed action.
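A rough sketch of the watcher side, assuming the VBScript drops one small text file per request into a shared folder (the folder path and file naming are placeholders):

$dropFolder = 'C:\PSQueue'

while ($true) {
    foreach ($file in @(Get-ChildItem -Path $dropFolder -Filter '*.req.txt')) {
        $request = Get-Content $file.FullName
        # act on the request here, e.g. disable Exchange features for the named user
        Write-Host "Processing request: $request"
        Remove-Item $file.FullName
    }
    Start-Sleep -Seconds 5
}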

WMIPRVSE needs to run under Network Service by default

I have 2 separate servers (Windows Server 2008 R2) on which I am running VBS scripts through the Microsoft scheduler (My Computer > Manage > Task Scheduler). When I run the VBS scripts locally they work fine, but when they are run through the scheduler, one of the servers gets stuck while the other works fine. I have also noticed in Task Manager that the working server runs WMIPRVSE.exe under the Network Service user, while the other one shows SERVICES as the user.
How can I make sure that WMIPRVSE.exe will always run under Network Service? Thanks.
Edit:
I have tried to change the log-on user from the Services console, but then the service failed to start.
There are a few things I have tried, but I don't know which one helped me.
What I did was grant all permissions on the wscript file, which is located somewhere in system32, and after some time it started running as Network Service. Again, I'm not really sure whether it was because of that change or something else.
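For what it's worth, a quick way to see which account(s) the WMI provider host is running under on each box (there can be several WmiPrvSE.exe instances at once, each under its own account):

Get-WmiObject Win32_Process -Filter "Name='WmiPrvSE.exe'" | ForEach-Object {
    $owner = $_.GetOwner()
    # print one DOMAIN\user line per running WmiPrvSE.exe instance
    "{0}\{1}" -f $owner.Domain, $owner.User
}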

Printing from an application in IIS to a networked printer on server

I have a line of code that I can run locally as part of a service that works perfectly fine.
sReportPath = objCrystalUtils.ExportReportToPDF("Report Name", iReportInfoID)
This code is run as a part of a service, and when I unit test it by feeding it data, it ultimately builds the report and prints it.
When I run the exact same piece of code inside an .ashx from an AJAX call, the reports are generated (I can see the PDF files being created on disk) but the printing does not happen.
oRpt.PrintToPrinter(objReport.DefaultAutoPrint, True, 0, 0)
In both scenarios the same code is used to print the report. (objReport.DefaultAutoPrint = 0 in both cases)
My only thought is that the location of the code that is calling this method is in a different spot relative to the location of the bills themselves.
The printer that I'm trying to print to is a network printer installed on my machine, and I'm running Windows 7 with IIS 6.1.
Any thoughts?
Edit:
Here is a thought... if I'm running one as a unit test locally and running the other through a web app via IIS, is there a difference in user ID and user access to the default printer?
Edit:
So I added my local ASP, IUSR, and SYSTEM users to the printer security and allowed them to print... no dice. So I checked the EVERYONE user, and it is allowed access and NO users are denied... so I think that kinda kills that line of reasoning.
Edit:
I changed the name of this post since I no longer think the issue is AJAX-related: if I try to do the same process in code-behind from a postback instead of running it from an AJAX call, I still get the same problem.
Patrick, for me this is a known issue with Crystal Reports when printing a report from an application running via IIS.
I ran into the same issue before, and when we researched it, we ended up with the following approach: the report is generated, exported, and then downloaded to the client machine so the user can print it locally (say, the report is exported as a PDF file and the user uses the print option of their PDF reader).
It's not a problem with Crystal Reports or any other third-party app. It's usually a permission problem with the IIS_IUSER account, because it has no access to any network printers. A possible solution is in "Process.Start doesn't work in IIS".
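It can also help to confirm exactly which identity the application pool (and therefore the .ashx) runs as; on IIS 7.x something like this dumps each pool's settings, including processModel.identityType (appcmd path assumed to be the default install location):

# list every application pool with all of its attributes
& "$env:windir\System32\inetsrv\appcmd.exe" list apppool /text:*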

service doesn't behave the same as command line

I am running on a Windows Server 2003. This is my problem:
I wrote a Perl script to automate the copy of some files from my Server machine to some network drives. I am using xcopy to copy the files. My problem is the permissions.
If I run the script from the command line, it works, all the copies are successful.
If I try to run the script using a service, all the copies fail. This service is a program that I wrote that takes the script and runs it. In the background, all it does is call the C function 'system' to run the same command that I can run from the command line.
I have tried many variations of this to figure out what is wrong with it but I can't see why the service would not run the same way I run it from the command line.
I set up the service to run as the same user I am using from the command line.
I also tried to map the network drives as the user that has write permission, but the result is the same. Manually the script works; from the service, it doesn't.
Any suggestion is appreciated.
Thanks
Tony
The service may be running as the system and not have access to the network drives. In the Service settings, change the service to run under your account (or an account with the relevant permissions/mappings).
When the service runs, it uses whatever credentials you specify in the Services manager of Windows. The default, LOCAL SERVICE, probably does not have permission to access the resources to be copied.
Create a new user account with the minimum set of permissions needed to perform the copy and configure your service to run under that account.
I did figure out the issue (I think), and that matches what I later found in another post:
https://serverfault.com/questions/4623/windows-can-i-map-a-network-drive-for-a-service-account
"...Persistent drive mappings are only restored during an interactive login, which the service does not use. I believe the only way to get a service to use a network drive is for that service to map the drive itself, or alternatively for it to use a UNC path instead of a mapped drive."
What I did was map the drive from within the service, and that seems to work. It turns out that if I map the drive and save the credentials, I can then access the drive later without having to map it again. I don't know why this approach works, though.
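For illustration, the two options look roughly like this from inside the script the service runs (server, share, account, and paths are placeholders; shown with PowerShell's Copy-Item, but the same applies to xcopy in the Perl script's system() call):

# Option 1: skip drive letters entirely and copy straight to the UNC path
Copy-Item -Path 'C:\data\*' -Destination '\\fileserver\backup' -Recurse

# Option 2: have the service's own script map the drive first, then copy and clean up
net use Z: \\fileserver\backup /user:DOMAIN\svc_copy 'P@ssw0rd'
Copy-Item -Path 'C:\data\*' -Destination 'Z:\' -Recurse
net use Z: /delete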
-Thanks everybody for your help.
Tony
