Piping / Filtering Windows DNS Server logs

I am looking to log all the unique hosts which have had any transaction with my Windows DNS Server.
I found that there is an option to log my DNS server transactions via the Set-DnsServerDiagnostics PS command.
However, it is quite heavy and I am not interested in most of the data there. I just care about the host name, for example www.google.com.
I was wondering if there's a way to create a file pipe which consumes the log data and filters it, resulting in a file which contains domain names only.
I saw that I could specify the file path with the -LogFilePath argument - it may help.
Any help / ideas will be appreciated!
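One possible approach: enable debug logging to a fixed file, then post-process that file with PowerShell. Below is a minimal, untested sketch; the log path and the name-matching regex are assumptions about the debug log format, which writes query names as length-prefixed labels such as (3)www(6)google(3)com(0):

# Minimal sketch: extract unique query names from the DNS debug log.
# Enable logging first, e.g.:
#   Set-DnsServerDiagnostics -All $true -LogFilePath 'C:\dns\dns.log'
Get-Content 'C:\dns\dns.log' |
    ForEach-Object {
        if ($_ -match '(?:\(\d+\)[A-Za-z0-9\-_]+)+\(0\)') {
            # convert (3)www(6)google(3)com(0) back to dotted form
            ($matches[0] -replace '\(\d+\)', '.').Trim('.')
        }
    } |
    Sort-Object -Unique |
    Set-Content 'C:\dns\unique-hosts.txt'

For a live pipe rather than a batch pass, Get-Content -Wait can tail the file, but the de-duplication then has to be done incrementally (e.g. with a hashtable) instead of with Sort-Object -Unique.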

Related

Clarion based application in WinServer 2016, frozen. Suggestions needed on tools we could use to gather more information

I have a Clarion application running on Windows Server 2016, talking to a Sybase DB. Over the past few weeks the application gets frozen for different users at a given time. However, the user can abandon the session and start a new one, and that works fine. The users are known to run multiple instances of the same application on one remote server or on multiple servers.
To get more information on the freeze, I looked through the application event logs on the system. I see explorer.exe crashes that sometimes correlate with the time the issue occurs, but not always. I also checked the DB transaction logs from Sybase and found no crashes, errors, or stuck connections. Since I have exhausted all the options I know of, I am reaching out to you to ask if there are any other places I can look to gather more information.
I would love to know of any applications or tools we could use to gather logs from a frozen Clarion application on Windows. It would also be good to hear if anyone has faced such a situation, and where and how you looked into the issue.
Thanks in advance for your help.
Clarion's runtime library and database drivers expect a persistent connection. Disconnects that are normal with remote ODBC can cause problems (including app hangs) unless you test for them at the ABC file manager level and reconnect, or use similar steps to test and recover.
If you're looking for specifics about what's going on between the driver and the SQL backend, I suggest using Clarion's database driver trace facilities. From the help topic: "Logging Driver I/O for debugging":
To view the trace details in DebugView, name the target trace file "DEBUG:".
Logging opens the named logfile for exclusive access. If the file exists, the new log data is appended to the file.
On Demand Logging
For on-demand logging you can use property syntax within your program to conditionally turn various levels of logging on and off. The logging is effective for the target table and any view for which the target table is the primary table.
file{PROP:Profile} = Pathname   !Turns Clarion I/O logging on
file{PROP:Profile} = "DEBUG:"   !Turns Clarion I/O logging on and
                                !sends output via OutputDebugString()
                                !(viewable via DebugView, etc.)
file{PROP:Profile} = ''         !Turns Clarion I/O logging off
PathName = file{PROP:Profile}   !Queries the name of the log file
file{PROP:Log} = string         !Writes the string to the log file
file{PROP:Log} = "DEBUG:"       !Writes the string to the log file
file{PROP:Details} = 1          !Turns record buffer logging on
file{PROP:Details} = 0          !Turns record buffer logging off
where Pathname is the full pathname or the filename of the log file to create. If you do not specify a path, the driver writes the log file to the current directory.
You can also accomplish on demand logging with a SEND() command and the LOGFILE driver string. See LOGFILE for more information.
Here's an example I use frequently, based on the help above:
SYSTEM{PROP:DriverTracing} = '1'      ! enable driver tracing globally
CRMNotes{PROP:TraceFile} = 'DEBUG:'   ! send trace output to the debugger
CRMNotes{PROP:Details} = 1            ! include record buffer contents
CRMNotes{PROP:Profile} = 'DEBUG:'     ! turn I/O logging on, output to the debugger
CRMNotes{PROP:LogSQL} = 1             ! include the generated SQL statements

Write a Windows Event log entry based on a source CSV log file and query

SCENARIO
I've been researching ways to do this on Google but I'm not finding anything, so I really hope SO can help out a netadmin from SF. I see a lot of ways to export FROM the Windows event logs, and ways to write events for custom-written apps, but nothing so far for taking existing log files and "converting" them to eventvwr entries.
I'm working on an issue where I need a way to receive notifications/alerts based on the Windows Server DHCP audit logs: http://technet.microsoft.com/en-us/library/dd759178.aspx
The logs are written to C:\windows\system32\DHCP as DhcpSrvLog-Fri, etc., and auto-rotate on their own.
I need information from this log (particularly event ID 10s, which show a new lease to a client), and will be querying it and comparing it against AD, then either writing to a new CSV file or writing a new eventvwr entry directly.
WHAT'S NEEDED
END RESULT: The end goal here is to receive an email notification when a non-AD joined device gets a DHCP lease address from our DHCP server. More specific details can be found here: https://serverfault.com/questions/550653/windows-dhcp-server-get-notification-when-a-non-ad-joined-device-gets-an-ip-ad
However, in regards to this particular question, what I'm looking for is understanding of how to take an existing file (csv) and write a custom Windows event log entry based on its contents.
I can't seem to find ways to take an existing file as input. Would I have to write something that parses through the DHCP server audit logs and creates variables that get included in something like Write-EventLog? If PowerShell is the wrong path to go down, I'm open to suggestions.
The DHCP server logs are not built the same way as an event, so you have to parse the CSV file and create events manually. Since there wasn't any real question here, I'll just provide a sample to get you going (untested and incomplete; note that the DhcpSrvLog files start with an explanatory preamble, so you may need to skip down to the CSV header line first):
Import-Csv filename | Where-Object { $_.ID -eq 10 } | ForEach-Object {
    # If you're using the Quest module for AD management
    if (!(Get-QADComputer -Name $($_."Host Name"))) {
        # The event source must be registered once beforehand, e.g.:
        #   New-EventLog -LogName Application -Source 'DhcpAudit'
        # (the 'DhcpAudit' source name here is arbitrary)
        Write-EventLog -LogName Application -Source 'DhcpAudit' -EventId 10 `
            -EntryType Warning -Message "Non-AD device got a lease: $($_.'Host Name')"
    }
}
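Since the end goal in the linked question is an email alert, a hypothetical extra step inside that same ForEach-Object block might look like this; the SMTP server and addresses are placeholders, and the column names are the ones from the DHCP audit log header:

# Hypothetical: mail the alert as well as (or instead of) writing the event.
Send-MailMessage -SmtpServer 'smtp.example.local' `
    -From 'dhcp-alerts@example.local' -To 'netadmin@example.local' `
    -Subject "DHCP lease for non-AD device: $($_.'Host Name')" `
    -Body "$($_.'Host Name') ($($_.'IP Address')) got a lease at $($_.Date) $($_.Time)"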

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the report server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server'
                 || '&report='      || p_report_name
                 || '&userid='      || p_connstring
                 || '&destype=mail' || p_parameters
                 || '&desname='     || p_to_recipientlist
                 || '&cc='          || p_cc_recipientlist
                 || '&bcc='         || p_bcc_recipientlist
                 || '&subject=%22'  || REPLACE(p_subject,' ','%20') || '%22'
                 || '&paramform=no&DESformat=pdf&ENVID=' || p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without really any warning, so I can't just hardcode one of the report server names into the ?server= parameter above: it would work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from a default.env value within the job (there are parameters in there that hold the server name). The idea is that the "http://server" piece will direct the report to run on the appropriate server, and the first part of the job could read the report server name from the config file on that server. I'm not sure if this is possible from the database level, or how to do it. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
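If you want to sanity-check such a VIP from a client before pointing the job at it, a quick PowerShell probe works (reports.yourdomain.com is the hypothetical name from the example above):

# Confirm the VIP resolves and the reports servlet answers behind it.
Resolve-DnsName 'reports.yourdomain.com' | Select-Object Name, IPAddress
Invoke-WebRequest -Uri 'http://reports.yourdomain.com/reports/rwservlet' -UseBasicParsing |
    Select-Object StatusCode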

check directory of oracle logs

I'm using the check_logfiles nagios plugin to monitor Oracle alert logs. It works wonderfully for that purpose.
However, I also need to monitor an entire directory of Oracle trace logs for errors, because the Oracle database is always creating new log files with different names.
What I need to know is the best way to scan an entire directory of Oracle trace logs to find out which ones match patterns that indicate Oracle alerts.
Using check_logfiles I tried specifying these options:
--criticalpattern='ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
and, to specify the directory of logs:
--logfile='/global/cms/u01/app/orahb/admin/opbhb/udump/'
and
--logfile="/global/cms/u01/app/orahb/admin/opbhb/udump/*"
Neither has any effect: the check runs but returns OK. Does anyone know if this Nagios plugin, check_logfiles, can monitor a directory of files rather than just a single file? Or perhaps there is another, better way to achieve the same goal of monitoring a bunch of files that can't be specified ahead of time?
Use a script which (see the sketch after this list):
Opens each file
Copies entries which match the pattern
Outputs the matches to a file
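For example, a minimal PowerShell sketch of that script; the *.trc filter and the output path are assumptions, and the pattern is the one from the question:

# Scan every Oracle trace file in the udump directory for the alert patterns.
$pattern = 'ORA-00600|ORA-00060|ORA-07445|ORA-04031|Shutting down instance'
Get-ChildItem '/global/cms/u01/app/orahb/admin/opbhb/udump' -Filter '*.trc' |
    Select-String -Pattern $pattern |
    ForEach-Object { "$($_.Path):$($_.LineNumber): $($_.Line)" } |
    Set-Content '/tmp/oracle-trace-matches.txt'   # hypothetical output path

A wrapper like this can then be scheduled, or wired into a Nagios check that alerts whenever the output file is non-empty.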

What is good/free software for monitoring IIS in Windows Vista?

I always forget to check what's going on in IIS on our webservers, and am wondering: is there some stupid applet or something that always runs locally that I can click on to check event logs and IIS logs on a remote machine?
Mark
You can set up Samurize to follow the output of the logging on the local and remote machines, but it requires some setup.
You can use a remote shell utility such as OpenSSH to connect to remote machines securely.
One at a time. Compmgmt.msc -> connect to another computer.
But one at a time is boring. Monitoring dozens of machines? I've been using logparser from MS for my log monitoring needs. I run a query that dumps errors and warnings to a CSV file a few times a day.
So far, I've only used it to aggregate event logs across the dozen servers in our QA environment, but it accepts many formats of log input, including IIS. A pseudo log-file query (I don't have samples with me):
logparser "Select [eventtype], [message] into output.csv FROM \\server1\system, \\server2\system" -i EVT
This shows: you can aggregate multiple servers; you tell it the input format (it supports a dozen log types); you can dump the results into a CSV file; and the query looks sort of like SQL. This article on SecurityFocus has an IIS log sample.
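In the same spirit, a hypothetical IIS variant; the log path, field list, and status filter are assumptions to adapt to your own W3C log location:

# Dump failed IIS requests from a W3C log share into a CSV file (paths assumed).
logparser -i:IISW3C -o:CSV "SELECT date, time, c-ip, cs-uri-stem, sc-status INTO iis-errors.csv FROM \\server1\c$\inetpub\logs\LogFiles\W3SVC1\*.log WHERE sc-status >= 500"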
I'm not an applet type of guy, so I haven't thought much about desktop widgets to do this.
