Automating scripts, Rscript - Windows

I'm having a lot of trouble automating my .R files, and trouble understanding the available information on the subject. But here goes:
I'm using Windows 7 and simply want to run an R script automatically every morning at 08:00. The .R file writes its output by itself, so I don't want a separate output file. I've created a .bat file like this:
"C:\R\R-3.0.1\bin\x64\Rscript.exe" "C:\R\R-3.0.1\bin\x64\Scripts\24AR_v1bat.R"
Echo %DATE% %TIME% %ERRORLEVEL% >> C:\R\R-3.0.1\bin\x64\scripts\24AR_v1.txt
When I run this manually, it works perfectly, both with and without:
--default-packages=list
When I run it through the cmd window, it works perfectly. Yet when I run it through the Task Scheduler, it runs but does not work (I get either a 1 or a 2 as the error level in my log file).
I've looked at R Introduction - Invoking R from the command line, and at help(Rscript), but I still can't get it to work.
NEW EDIT: I found that skipping the MS SQL call lets my code run from the scheduler. Not sure if I should make a new question for that?
EDIT: Adding the R script:
# 24 Hour AR-model, v1 ----------------------------------------------------
#Remove all variables from the workspace
#rm(list=ls())
# Loading Packages
library(forecast)
#Get spot-prices System from 2012-01-01 to today
source("/location/Scripts/SQL_hourlyprices.R")
sys <- data.frame()
sys <- spot
rm(spot)
# Ordering the data, first making a matrix with names: SYS
colnames(sys) <- c("date","hour","day","spot")
hour <- factor(sys[,2])
day <- factor(sys[,3])
dt <- sys[,1]
dt <- as.Date(dt)
x <- sys[,4]
q <- ts(x, frequency=24)
x0 <- q[hour==0]
x1 <- q[hour==1]
x0 <- ts(x0, frequency=7)
x1 <- ts(x1, frequency=7)
# ARIMA MODELS
y0 <- Arima(x0, order=c(2,1,0))
y1 <- Arima(x1, order=c(2,1,1))
fr0 <- forecast.Arima(y0, h=1)
fr1 <- forecast.Arima(y1, h=1)
h1 <- as.numeric(fr0$mean)
h2 <- as.numeric(fr1$mean)
day1 <- Sys.Date()+1
atable <- data.frame()  # note: 'atable <- data.frame' without parentheses assigns the function itself
runtime <- Sys.time()
atable <- cbind(runtime, day1, h1, h2)
options(digits=4)
write.table(atable, file="//location/24ar_v1.csv",
            append=TRUE, quote=FALSE, sep=",", row.names=FALSE, col.names=FALSE)
But as I said, I can run the code manually with the batch file and it works perfectly; with the scheduler it won't.

After hours of trying everything, it seems the problem was that I had:
source("/location/Scripts/SQL_hourlyprices.R")
where I simply had a SQL call inside:
sqlQuery(dbdata2, "SELECT CONVERT(char(10), [lokaldatotid],126) AS date,
DATEPART(HOUR,lokaldatotid) as hour,
DATENAME(DW,lokaldatotid) as dag,
pris as spot
FROM [SpotPriser] vp1
WHERE (vp1.boers_id=0)
AND (vp1.omraade_id=0)
AND lokaldatotid >='2012-01-01'
GROUP BY lokaldatotid, pris
ORDER BY lokaldatotid, hour desc") -> spot
When I moved this directly into the script and deleted the source() line, the script would run from the scheduler.
I have no idea why...
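A plausible explanation (speculative, not verified here): Task Scheduler runs the job under a different user account and working directory than your interactive session, so anything that depends on that environment, such as a relative path in source() or a per-user ODBC DSN used by sqlQuery(), can fail even though the same batch file works from cmd. A sketch of a more scheduler-friendly setup using RODBC with an explicit connection (the DSN name and credentials are placeholders):
library(RODBC)
# Open the connection explicitly in the script; "mydsn", "user" and "password"
# are placeholders for whatever the scheduled account can actually reach.
dbdata2 <- odbcConnect("mydsn", uid = "user", pwd = "password")
spot <- sqlQuery(dbdata2, "SELECT ...")   # the full query shown above
odbcClose(dbdata2)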

How to get the Pause time of an Agent via AMI?

I'm making a WebSocket application and need to get the current pause time of an agent.
When I call the action QueueStatus, the response is a QueueMember event,
and in JSON something like this is returned:
{
  "ActionID": "WelcomeStatus/7000",
  "CallsTaken": "0",
  "Event": "QueueMember",
  "InCall": "0",
  "LastCall": "0",
  "LastPause": "1568301325",
  "Location": "Agent/7000",
  "Membership": "dynamic",
  "Name": "Agent/7000",
  "Paused": "1",
  "PausedReason": "Almoço",
  "Penalty": "0",
  "Queue": "queue1",
  "StateInterface": "Agent/7000",
  "Status": "4"
}
Note that "LastPause", "PausedReason" and "Paused" are returned.
"LastPause" always shows some huge number (it looks like a Unix timestamp; I don't understand that number).
Well, how do I get the current pause time from Asterisk 15?
EDIT:
By retesting, I found that the cause is that I am also submitting a reason for the pause.
If I do not send the reason, the pause time works normally.
Thanks for your help.
Browsing the Asterisk forums, I found the release note:
Bugs fixed in this release:
ASTERISK-27541 - app_queue: Queue paused reason was (big number) secs ago when reason is set (Reported by César Benjamín García Martínez)
But this release is for Asterisk 16, not for Asterisk 15.
I decided to chase this issue in the C source files, and I found the bug.
Remember that I had to recompile my Asterisk, because this changes the source code directly.
So if you need to perform this procedure, do it in a test environment before rolling it out to production.
Open the file:
/usr/src/asterisk-15.7.3/apps/app_queue.c
and search for this line:
mem->reason_paused, (long) (time(NULL) - mem->lastcall), ast_term_reset());
and change it to:
mem->reason_paused, (long) (time(NULL) - mem->lastpause), ast_term_reset());
And on this line:
"LastPause", (int)mem->lastpause,
Change to:
"LastPause", (long) (time(NULL) - mem->lastpause),
I think that's it. All AMI requests and CLI commands now return the correct information for me, and it works nicely with my AMI socket.
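For reference, a typical rebuild cycle after editing app_queue.c might look like this (paths follow the source tree above; adjust to your installation, and again, test before production):
cd /usr/src/asterisk-15.7.3
make                                   # rebuilds app_queue.so from apps/app_queue.c
sudo make install                      # installs the updated modules
sudo asterisk -rx "core restart now"   # restart so the new module is loaded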

Strange pause when Python downloads files from FTP

I have some code that checks some directories on an FTP server and downloads new files to my server. There are over 3 million files on the server (zip archives). I do many non-optimal things in this code, but they all run fast, except the downloading part. Here it is:
lf = open(local_filename, "wb")   # create an empty local file
print ("opened")
try:
    ftp.retrbinary("RETR " + name, lf.write)   # stream the remote file into it
    print ("wrote")
except ftplib.error_perm:
    pass
lf.close()   # close the file holding the data
print ("closed")
My problem is in the part between print ("opened") and print ("wrote"). My Python console (2.7) stays silent for 10-20 seconds in this phase, even though the downloaded files are tiny, below 2-3 KB.
The strange thing is: when I start the script from my own PC (Windows 7), it works great and fast, but when I start it on Windows Server 2012 R2 (a VDS), I get this sad pause. Guys, I need your help. What should I do to configure my server for fast downloading?
I got the answer. You just need to run the following command:
netsh int tcp set global ecncapability=disabled
and everything will be excellent! (This disables TCP Explicit Congestion Notification, which apparently interacts badly with that network path.)
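For completeness, a minimal self-contained version of the download (the host, credentials, and file name are placeholders, and error handling is kept as in the question):
import ftplib

ftp = ftplib.FTP("ftp.example.com")   # placeholder host
ftp.login("user", "password")         # placeholder credentials

name = "archive0001.zip"              # remote file name (hypothetical)
local_filename = name

with open(local_filename, "wb") as lf:            # file closes even if RETR fails
    try:
        ftp.retrbinary("RETR " + name, lf.write)  # stream remote file to disk
    except ftplib.error_perm:
        pass                                      # skip files we may not read

ftp.quit()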

How to resume reading a file?

I'm trying to find the best and most efficient way to resume reading a file from a given point.
The file is written to frequently (it is a log file).
The file is rotated on a daily basis.
In the log file I'm looking for the pattern 'slow transaction'. Such lines end with a number in parentheses. I want the sum of those numbers.
Example of log line:
Jun 24 2015 10:00:00 slow transaction (5)
Jun 24 2015 10:00:06 slow transaction (1)
That is the easy part, which I could do with an awk command to get the total of 6 for the example above.
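For reference, one way to do that easy part with awk (a sketch, assuming the field layout of the example lines, where the parenthesized number is the last field):
awk '/slow transaction/ { gsub(/[()]/, "", $NF); sum += $NF } END { print sum }' logfile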
Now my challenge is that I want to get the values from this file on a regular basis. I have an external system that polls a custom OID using SNMP. When that OID is hit, the Linux host runs a couple of basic commands.
I want each SNMP polling event to get only the number of events since the previous poll. I don't want the running total every time, just the total for the newly added lines.
Just to mention that only bash can be used, or basic commands such as awk, sed, tail, etc. No Perl or other advanced programming language.
I hope my description is clear enough. Apologies if this is a duplicate; I did some research before posting but did not find anything that precisely corresponds to my need.
Thank you for any assistance.
In addition to the methods in the comment link, you can also simply use stat to read the logfile size, save it, sleep 300, then check the logfile size again. If the size has changed, skip over the old information with dd and read only the new information.
Note: you can add a test to handle the case where the logfile is deleted and restarted at size 0 (e.g. if ((newsize < size)), read the whole file again); a variant that does this is sketched after the example.
Here is a short example with 5 minute intervals:
#!/bin/bash
lfn=${1:-/path/to/logfile}
size=$(stat -c "%s" "$lfn")          ## save original log size
while :; do
    newsize=$(stat -c "%s" "$lfn")   ## get new log size
    if ((size != newsize)); then     ## if changed, use new info
        ## use dd to skip over existing text to the new text
        newtext=$(dd if="$lfn" bs="$size" skip=1 2>/dev/null)
        ## process newtext however you need
        printf "\nnewtext:\n\n%s\n" "$newtext"
        size=$((newsize))            ## update size to newsize
    fi
    sleep 300
done
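And a variant handling the daily rotation mentioned in the question (a sketch under the same assumptions; if the file shrinks, we assume it was rotated and reread from the start):
#!/bin/bash
lfn=${1:-/path/to/logfile}
size=$(stat -c "%s" "$lfn")
while :; do
    newsize=$(stat -c "%s" "$lfn")
    if ((newsize < size)); then
        size=0                       ## logfile was rotated -- start over
    fi
    if ((newsize != size)); then
        if ((size == 0)); then
            newtext=$(cat "$lfn")    ## the whole file is new
        else
            newtext=$(dd if="$lfn" bs="$size" skip=1 2>/dev/null)
        fi
        printf "\nnewtext:\n\n%s\n" "$newtext"
        size=$newsize
    fi
    sleep 300
done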

Code shows error in cluster but works fine otherwise

Hello everyone,
I have a bash file which has the following code:
./lda --num_topics 15 --alpha 0.1 --beta 0.01 --training_data_file testdata/test_data.txt --model_file Model_Files/lda_model_t15.txt --burn_in_iterations 120 --total_iterations 150
This works perfectly fine normally, but when I run it on a cluster it does not load the data that it is supposed to load from the connected .cc files. I have put #!/bin/bash in the header. What can I do to rectify this situation? Please help!
You will need to give the full path to the lda executable. Since the script is not invoked by you manually, the shell will not know where to find the executable when the job runs. And since this is not a shell command, you don't necessarily even need the #!/bin/bash.
/<FullPath>/lda --num_topics 15 --alpha 0.1 --beta 0.01 --training_data_file testdata/test_data.txt --model_file Model_Files/lda_model_t15.txt --burn_in_iterations 120 --total_iterations 150
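Alternatively, since the script also uses relative paths for testdata/ and Model_Files/, you can make it change into its own directory first, so everything resolves the same way as in your manual runs (a sketch; assumes the data directories sit next to the script):
#!/bin/bash
cd "$(dirname "$0")" || exit 1    # run from the script's own directory
./lda --num_topics 15 --alpha 0.1 --beta 0.01 \
      --training_data_file testdata/test_data.txt \
      --model_file Model_Files/lda_model_t15.txt \
      --burn_in_iterations 120 --total_iterations 150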

Query windows event log for the past two weeks

I am trying to export a Windows event log but limit the exported events not by number but by the time the event was logged. I am trying to do this on Windows 7 and newer. So far my efforts have focused on using wevtutil.
I am using wevtutil and my command line now is: wevtutil epl Application events.evtx. The problem is that this exports the whole log, which can be quite big, so I want to limit it to just the last 2 weeks.
I found this post, but first of all it does not seem to produce any output on my system (yes, I have changed the dates and times), and second, it seems to depend on the date format, which I am trying to avoid.
Here is the modified command I ran:
wevtutil qe Application "/q:*[System[TimeCreated[@SystemTime>='2012-10-02T00:00:00' and @SystemTime<'2012-10-17T00:00:00']]]" /f:text
I had to replace the escaped < and > with the actual symbols, as I got a syntax error otherwise. This command produces empty output.
The problem is that /q: is inside the quotes. It should be outside, like this:
wevtutil qe Application /q:"*[System[TimeCreated[@SystemTime>='2012-10-02T00:00:00' and @SystemTime<'2012-10-17T00:00:00']]]" /f:text
This works just fine for me.
For the events of the last 2 weeks, you could also use timediff, to avoid hard-coding dates.
Windows uses milliseconds, so it would be 1000 * 86400 (one day in seconds) * 14 (days) = 1209600000.
For your query, that would look like
wevtutil qe Application /q:"*[System[TimeCreated[timediff(@SystemTime) <= 1209600000]]]" /f:text /c:1
I added /c:1 to return only one event in the example, since there are many events in the last 2 weeks.
You may also want to list only warnings and errors. For that, you can use (Level=2 or Level=3). (For some reason, Level<4 doesn't seem to work for me on Win7.)
wevtutil qe Application /q:"*[System[(Level=2 or Level=3) and TimeCreated[timediff(@SystemTime) <= 1209600000]]]" /f:text /c:1
I don't know how you feel about PowerShell, but it's available on all the systems you tagged.
From a PowerShell prompt, see Get-Help Get-EventLog -Examples for more info.
If you have to do this from a .cmd or .bat file, you can call powershell.exe -File powershell_script_file_name,
where powershell_script_file_name contains the Get-EventLog command(s) you need.
This example gives all the Security event log failures; I use it to audit systems:
Get-EventLog -LogName security -newest 1000 | where {$_.entryType -match "Failure"}
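For the original goal, events from the past two weeks, something along these lines should work (a sketch; Get-EventLog's -After parameter takes a DateTime):
Get-EventLog -LogName Application -After (Get-Date).AddDays(-14)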
I strongly recommend using LogParser for this kind of task:
logparser -i:evt file:query.sql
With query.sql containing something like this:
SELECT
TimeGenerated,EventID,SourceName,Message
FROM Application
WHERE TimeGenerated > TO_TIMESTAMP(SUB(TO_INT(SYSTEM_TIMESTAMP()), 1209600))
ORDER BY TimeGenerated DESC
The somewhat unintuitive date calculation converts the system time (SYSTEM_TIMESTAMP()) to an integer (TO_INT()), subtracts 1209600 seconds (60 * 60 * 24 * 14 = 2 weeks) and converts the result back to a timestamp (TO_TIMESTAMP()), thus producing the date from 2 weeks ago.
You can parameterize the timespan by replacing the fixed number of seconds with MUL(86400, $days) and changing the command line to this:
logparser -i:evt file:query.sql+days=14
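Put together, the parameterized query.sql would then look something like this (a sketch using the same functions as above):
SELECT
  TimeGenerated, EventID, SourceName, Message
FROM Application
WHERE TimeGenerated > TO_TIMESTAMP(SUB(TO_INT(SYSTEM_TIMESTAMP()), MUL(86400, $days)))
ORDER BY TimeGenerated DESC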
You can also pass the query directly to logparser:
logparser -i:evt "SELECT TimeGenerated,EventID,SourceName,Message FROM ..."
