Query Windows event log for the past two weeks - windows

I am trying to export a Windows event log but limit the exported events not by number but by the time the event was logged. I am trying to do that on Windows 7 and newer. So far my efforts are focused on using wevtutil.
I am using wevtutil and my command line now is: wevtutil epl Application events.evtx. The problem here is that I export the whole log, and this can be quite big, so I want to limit it to just the last 2 weeks.
I have found this post, but first of all it does not seem to produce any output on my system (yes, I have changed the dates and times), and second it seems to depend on the date format, which I am trying to avoid.
Here is the modified command I ran:
wevtutil qe Application "/q:*[System[TimeCreated[#SystemTime>='2012-10-02T00:00:00' and #SystemTime<'2012-10-17T00:00:00']]]" /f:text
I had to replace the < and > with the actual symbols as I got a syntax error otherwise. This command produces empty output.

The problem is due to /q: being inside quotes. It should be outside, like:
wevtutil qe Application /q:"*[System[TimeCreated[#SystemTime>='2012-10-02T00:00:00' and #SystemTime<'2012-10-17T00:00:00']]]" /f:text
This works just fine for me.

For the events of the last 2 weeks, you could also use timediff to avoid hard-coding dates.
Windows uses milliseconds, so two weeks is 1000 (ms per second) * 86400 (seconds per day) * 14 (days) = 1209600000.
For your query, that would look like
wevtutil qe Application /q:"*[System[TimeCreated[timediff(#SystemTime) <= 1209600000]]]" /f:text /c:1
I added /c:1 to get only 1 event in the example, since there are many events in the last 2 weeks.
You may also want to list only warnings and errors. For that, you can use (Level=2 or Level=3). (For some reason, Level<4 doesn't seem to work for me on Win7.)
wevtutil qe Application /q:"*[System[(Level=2 or Level=3) and TimeCreated[timediff(#SystemTime) <= 1209600000]]]" /f:text /c:1

I don't know how you feel about PowerShell, but it's available on all the systems you tagged.
From a PowerShell prompt, see Get-Help Get-EventLog -Examples for more info.
If you have to do this from a .cmd or .bat file, then you can call powershell.exe -File powershell_script_file_name
where powershell_script_file_name has the Get-EventLog command(s) you need in it.
This example, which I use to audit systems, gives all the Security event log failures:
Get-EventLog -LogName security -newest 1000 | where {$_.entryType -match "Failure"}
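If you want a time window rather than a fixed count, Get-EventLog also accepts an -After parameter. A small sketch for the last two weeks of warnings and errors (the log name and filter are illustrative, not from the original answer):
Get-EventLog -LogName Application -After (Get-Date).AddDays(-14) | where {$_.entryType -match "Error|Warning"}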

I strongly recommend using LogParser for this kind of task:
logparser -i:evt file:query.sql
With query.sql containing something like this:
SELECT
    TimeGenerated, EventID, SourceName, Message
FROM Application
WHERE TimeGenerated > TO_TIMESTAMP(SUB(TO_INT(SYSTEM_TIMESTAMP()), 1209600))
ORDER BY TimeGenerated DESC
The somewhat unintuitive date calculation converts the system time (SYSTEM_TIMESTAMP()) to an integer (TO_INT()), subtracts 1209600 seconds (60 * 60 * 24 * 14 = 2 weeks) and converts the result back to a timestamp (TO_TIMESTAMP()), thus producing the date from 2 weeks ago.
You can parameterize the timespan by replacing the fixed number of seconds with MUL(86400, $days) and changing the commandline to this:
logparser -i:evt file:query.sql+days=14
You can also pass the query directly to logparser:
logparser -i:evt "SELECT TimeGenerate,EventID,SourceName,Message FROM ..."

Related

How to include ProviderName in the command that gets event logs in the past ten hours

$A = @{}
$A.Add("StartTime", ((Get-Date).AddHours(-10)))
$A.Add("EndTime", (Get-Date))
$A.Add("LogName", "System")
(Get-WinEvent -FilterHashtable $A|Select TimeCreated, ProviderName, Message|FL)
The above commands will get all "System" event logs in the past 10 hours. However, I want to get only the event logs of "Microsoft-Windows-WindowsUpdateClient" in the past 10 hours. I tried the following line, which caused an error.
$A.Add("LogName", "System" ; "ProviderName", "*UpdateClient")
How should I include "ProviderName" in the command?
You have to add another key and value using the Add method:
$A.Add("ProviderName", "*UpdateClient")

ISDeploymentWizard.exe command (SSIS deployment) in CMD doesn't print any indication of status

I'm running the below command in CMD for SSIS:
ISDeploymentWizard.exe /Silent /ModelType:Project /SourcePath:"C:\TEST\Integration Services.ispac" /DestinationServer:"TEST03,1111" /DestinationPath:"/TEST/DEVOPS"
and it finishes successfully, but with no indication on the command line. I can only check with SSMS to make sure it was really deployed. Any idea why?
Solid observation here @areilma - the /Silent option eliminates all status info. I had always assumed that flag controlled whether the GUI was displayed or not.
If I run this command
isdeploymentwizard.exe /Silent /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
My package is deployed to my local machine at the path specified. Removing the /silent option causes the GUI to open up with the prepopulated values.
isdeploymentwizard.exe /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
When the former command runs, nothing is printed to the command prompt. So that's the happy-path deployment; maybe if something is "wrong", I'd get an error message on the command line. And this is where things got "interesting".
I altered my destination path to a folder that doesn't exist. I know the tool doesn't create a path if it doesn't exist, and when I ran it, I didn't get an error back on the command line. What I did get was a pop-up window error of
TITLE: SQL Server Integration Services
The path does not exist. The folder 'cBatchSizeTester' was not found in catalog 'SSISDB'. (Microsoft.SqlServer.IntegrationServices.Wizard.Common)
BUTTONS:
OK
So the /Silent option removes the GUI to allow us to have an automated deploy, but if a bad value is passed, we return to having a GUI... I then repeated with a bad server name, which led to a second observation. The second I hit enter, the command line returned, ready for the next command. 15 seconds later, however,
TITLE: SQL Server Integration Services
Failed to connect to server .\dev2017a. (Microsoft.SqlServer.ConnectionInfo)
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
Well now, that tells me that the actual deployment is an independent spawned process. So it won't return any data back to the command line, in any case.
Since I assume we're looking at this from a CI/CD perspective, what can we do? We could fire off a sqlcmd afterwards looking for an entry in the SSISDB catalog views to see what happened. Something like this:
SELECT TOP 1
    O.end_time, SV.StatusValue, F.name AS FolderName, P.name AS ProjectName
FROM catalog.operations AS O
CROSS APPLY
(
    SELECT
        CASE O.status
            WHEN 1 THEN 'Created'
            WHEN 2 THEN 'Running'
            WHEN 3 THEN 'Canceled'
            WHEN 4 THEN 'Failed'
            WHEN 5 THEN 'Pending'
            WHEN 6 THEN 'Ended unexpectedly'
            WHEN 7 THEN 'Succeeded'
            WHEN 8 THEN 'Stopping'
            WHEN 9 THEN 'Completed'
        END AS StatusValue
) SV
INNER JOIN catalog.object_versions AS OV
    ON OV.object_id = O.object_id
INNER JOIN catalog.projects AS P
    ON P.object_version_lsn = OV.object_version_lsn
INNER JOIN catalog.folders AS F
    ON F.folder_id = P.folder_id
/*
INNER JOIN catalog.packages AS PKG
    ON PKG.project_id = P.project_id
*/
WHERE O.operation_type = 101 /* deploy project */
    AND P.name = 'SO_66497856' /* project name */
    AND F.name = 'BatchSizeTester'
ORDER BY O.created_time DESC
Perhaps a filter on end_time within the past 10 seconds would be appropriate; if we have a result and the status is Succeeded, we got a deploy. No result means it failed. I presume something similar happens when the GUI runs, and despite all this testing, I'm not interested in firing up a trace to fully round out this answer and see what happens behind the scenes.
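If you wire that check into a pipeline step, the query could be saved to a file and run with sqlcmd, using -b so a failure surfaces as a non-zero exit code. A rough sketch; the file name and server are placeholders:
sqlcmd -S .\dev2017 -d SSISDB -i check_deploy.sql -b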
If you want to negate the value of the prebuilt tool, the other option would be to use the ManagedObjectModel/PowerShell approach to deploy, as you can get info back from there. The other deployment option is with the T-SQL commands. The second link in my documentation section outlines what that would look like.
Paltry documentation I could find
I could find no documentation as to the command line switches for isdeploymentwizard.exe
Deploy an SSIS project from the command prompt with ISDeploymentWizard.exe
Deploy Integration Services (SSIS) Projects and Packages
From @arielma's deleted answer, they found a more succinct answer saying "not possible".

How to delete current messages that are older than 30 days in WL JMS queues using WLST

I am trying to use cmo.deleteMessages to clean up messages that are older than 30 days.
connect(...)
domainRuntime()
print 'Cleaning Message from QUEUE:myqueue'
try:
    cd('ServerRuntimes/myserver/JMSRuntime/myserver.jms/JMSServers/myserver/Destinations/JMSMODULE!JMSmyserver@myqueue')
    cmo.deleteMessages("JMSTimestamp > 5200000000")
except:
    pass
However, WebLogic doesn't recognize the selector "JMSTimestamp > 5200000000"; it deletes all the messages.
When I put the entry "JMSTimestamp > 5200000000" in the Message Selector [in the WebLogic console], it shows all messages instead of only messages older than 30 days [5200000000 milliseconds is 30 days].
The problem is that the format "JMSTimestamp > 5200000000" is either not recognized by WebLogic or by the Python script. Any idea what I am missing?
I was able to create the timestamp in milliseconds using the date command in Linux:
$ date +%s%N | cut -b1-13
1617374452236
The JMSTimestamp parameter accepted this format, and I was able to perform the task.
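The cutoff can also be computed inside the WLST script instead of shelling out to date. A sketch, assuming JMSTimestamp holds an absolute epoch time in milliseconds (the 30-day window is illustrative):
import time
cutoff = long(time.time() * 1000) - (30 * 24 * 60 * 60 * 1000)  # now minus 30 days, in milliseconds
cmo.deleteMessages("JMSTimestamp < %d" % cutoff)  # matches messages older than the cutoff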

wevtutil event parsing for errors in last few events

I am using wevtutil to get the last 10 events on Windows servers, with this simple command:
wevtutil qe Application /rd:false /c:10 /f:text
So can I filter the events, for example: all the error events in the last 10 entries?
This will list the last 10 errors:
wevtutil qe application "/q:*[System[(Level=1 or Level=2 )]]" /c:10 /f:text /rd:true
Be careful: the XPath query is case-sensitive.

Automating scripts, Rscripts

I'm having a lot of trouble automating my .R files, and I'm having trouble understanding the information about it. But here goes:
I'm using Windows 7 and simply want to automatically run an R script every morning at 08:00. The .R file produces its output by itself, so I don't want a separate output file. I've created a .bat file like this:
"C:\R\R-3.0.1\bin\x64\Rscript.exe" "C:\R\R-3.0.1\bin\x64\Scripts\24AR_v1bat.R"
Echo %DATE% %TIME% %ERRORLEVEL% >> C:\R\R-3.0.1\bin\x64\scripts\24AR_v1.txt
When I run this manually, it works perfectly, both with and without the:
--default-packages=list
When I run it through the cmd window, it works perfectly. Yet when I try to run it through Task Scheduler it runs, but does not work. (I get either a 1 or a 2 error code in my error-message file.)
I've looked at R Introduction - Invoking R from the command line, and help(Rscript) but I still can't manage to get it to work.
NEW EDIT: I found that removing the MS SQL call lets my code run from the scheduler. Not sure if I should make a new question for that?
EDIT: Adding the R script:
# 24 Hour AR-model, v1 ----------------------------------------------------
#Remove all variables from the workspace
#rm(list=ls())
# Loading Packages
library(forecast)
#Get spot-prices System from 2012-01-01 to today
source("/location/Scripts/SQL_hourlyprices.R")
sys <- data.frame()
sys <- spot
rm(spot)
# Ordering the data, first making a matrix with names: SYS
colnames(sys) <- c("date","hour","day","spot")
hour <-factor(sys[,2])
day <-factor(sys[,3])
dt<-sys[,1]
dt<-as.Date(dt)
x<-sys[,4]
q <-ts(x, frequency=24)
x0<- q[hour==0]
x1<- q[hour==1]
x0 <-ts(x0, frequency=7)
x1 <-ts(x1, frequency=7)
# ARIMA MODELS
y0<-Arima(x0,order=c(2,1,0))
y1<-Arima(x1,order=c(2,1,1))
fr0 <- forecast.Arima(y0,h=1)
fr1 <- forecast.Arima(y1,h=1)
h1<-as.numeric(fr0$mean)
h2<-as.numeric(fr1$mean)
day1 <-Sys.Date()+1
atable<-data.frame()
runtime<-Sys.time()
atable<-cbind(runtime,day1,h1,h2)
options(digits=4)
write.table(atable, file="//location/24ar_v1.csv",
append=TRUE,quote=FALSE, sep=",", row.names=F, col.names=F)
But as I said, I can run the code manually with the batch file and it works perfectly, yet with the scheduler it won't work.
After hours of trying everything, it seems the problem was that I had:
source("/location/Scripts/SQL_hourlyprices.R")
where I simply had a SQL call inside:
sqlQuery(dbdata2, "SELECT CONVERT(char(10), [lokaldatotid],126) AS date,
DATEPART(HOUR,lokaldatotid) as hour,
DATENAME(DW,lokaldatotid) as dag,
pris as spot
FROM [SpotPriser] vp1
WHERE (vp1.boers_id=0)
AND (vp1.omraade_id=0)
AND lokaldatotid >='2012-01-01'
GROUP BY lokaldatotid, pris
ORDER BY lokaldatotid, hour desc") -> spot
When I moved this directly into the script and deleted the source line, the script would run with the scheduler.
I have no idea why....
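A low-effort way to see what actually goes wrong under Task Scheduler (purely a debugging sketch; the log file name is an example, not part of the original setup) is to capture Rscript's output and errors in the batch file:
"C:\R\R-3.0.1\bin\x64\Rscript.exe" "C:\R\R-3.0.1\bin\x64\Scripts\24AR_v1bat.R" >> "C:\R\R-3.0.1\bin\x64\scripts\24AR_v1_run.log" 2>&1
Echo %DATE% %TIME% %ERRORLEVEL% >> C:\R\R-3.0.1\bin\x64\scripts\24AR_v1.txt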
