How to get the Pause time of an Agent via AMI? - websocket

I'm making a WebSocket application and need to get the current Pause time of an Agent.
When I call the QueueStatus action, the response is a QueueMember event,
and in JSON something like this is returned:
{
  "ActionID": "WelcomeStatus/7000",
  "CallsTaken": "0",
  "Event": "QueueMember",
  "InCall": "0",
  "LastCall": "0",
  "LastPause": "1568301325",
  "Location": "Agent/7000",
  "Membership": "dynamic",
  "Name": "Agent/7000",
  "Paused": "1",
  "PausedReason": "Almoço",
  "Penalty": "0",
  "Queue": "queue1",
  "StateInterface": "Agent/7000",
  "Status": "4"
}
Note that "LastPause", "PausedReason" and "Paused" are returned.
"LastPause" always shows some huge number that I don't understand.
So, how do I get the current pause time from Asterisk 15?
--EDIT:
By retesting, I found that the cause is that I am also submitting a pause Reason.
If I do not send the Reason, the pause time works normally.
Thanks for your help.
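(Side note: the "huge number" in LastPause looks like a Unix epoch timestamp of when the pause started, so a client can also derive the current pause duration itself, without patching Asterisk. A minimal TypeScript sketch, assuming event is the parsed QueueMember JSON shown above:)

// Sketch: derive the current pause duration from an unpatched QueueMember event.
// Assumption: LastPause is epoch seconds of the last pause start ("0" if never paused).
interface QueueMemberEvent {
  Paused: string;     // "1" while the member is paused
  LastPause: string;  // epoch seconds of the last pause start
}

function pauseSeconds(event: QueueMemberEvent): number {
  const lastPause = parseInt(event.LastPause, 10);
  if (event.Paused !== "1" || lastPause === 0) {
    return 0; // member is not currently paused
  }
  return Math.floor(Date.now() / 1000) - lastPause;
}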

Surfing the Asterisk forum, I found this release note:
Bugs fixed in this release:
ASTERISK-27541 - app_queue: Queue paused reason was (big number) secs ago when reason is set (Reported by César Benjamín García Martínez)
But this release is for Asterisk 16, not for Asterisk 15.
I decided to search for this issue in the C source files, and I found the bug.
Remember that I have to recompile my Asterisk, because I am changing things directly in the source code.
So if you need to perform this procedure, do it in a test environment before it goes to production.
Open the file:
/usr/src/asterisk-15.7.3/apps/app_queue.c
And search for this line:
mem->reason_paused, (long) (time(NULL) - mem->lastcall), ast_term_reset());
And change it to:
mem->reason_paused, (long) (time(NULL) - mem->lastpause), ast_term_reset());
Then find this line:
"LastPause", (int)mem->lastpause,
And change it to:
"LastPause", (long) (time(NULL) - mem->lastpause),
I think that's it... All AMI requests and CLI commands are now returning the correct information for me, and it works nicely with my AMI socket.

Related

ISDeploymentWizard.exe command (SSIS deployment) in CMD doesn't print any status indication

I'm running the below command in CMD for SSIS:
ISDeploymentWizard.exe /Silent /ModelType:Project /SourcePath:"C:\TEST\Integration Services.ispac" /DestinationServer:"TEST03,1111" /DestinationPath:"/TEST/DEVOPS"
and it finishes successfully, but with no indication on the command line. I can only check with SSMS to make sure it was really deployed. Any idea why?
Solid observation here @areilma - the /Silent option eliminates all status info. I had always assumed that flag controlled whether the gui was displayed or not.
If I run this command
isdeploymentwizard.exe /Silent /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
My package is deployed to my local machine at the path specified. Removing the /silent option causes the GUI to open up with the prepopulated values.
isdeploymentwizard.exe /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
When the former command runs, nothing is printed to the command prompt. So that's the happy-path deployment; maybe if something were "wrong", I'd get an error message on the command line. And this is where things got "interesting".
I altered my destination path to a folder that doesn't exist. I know the tool doesn't create a path if it doesn't exist, and when I ran it, I didn't get an error back on the command line. What I did get was a pop-up window error:
TITLE: SQL Server Integration Services
The path does not exist. The folder 'cBatchSizeTester' was not found in catalog 'SSISDB'. (Microsoft.SqlServer.IntegrationServices.Wizard.Common)
BUTTONS:
OK
So the /Silent option removes the GUI to allow an automated deploy, but if a bad value is passed, we're back to having a GUI... I then repeated the test with a bad server name, which led to a second observation: the second I hit Enter, the command line returned, ready for the next command. 15 seconds later, however,
TITLE: SQL Server Integration Services
Failed to connect to server .\dev2017a. (Microsoft.SqlServer.ConnectionInfo)
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
Well now, that tells me that the actual deployment is an independent spawned process, so it won't return any data back to the command line in any case.
Since I assume we're looking at this from a CI/CD perspective, what can we do? We could fire off a sqlcmd afterwards that looks for an entry in the SSISDB catalog views to see what happened. Something like this:
SELECT TOP 1 O.end_time, SV.StatusValue, F.name AS FolderName, P.name AS ProjectName
FROM catalog.operations AS O
CROSS APPLY
(
SELECT
CASE O.status
WHEN 1 THEN 'Created'
WHEN 2 THEN 'Running'
WHEN 3 THEN 'Canceled'
WHEN 4 THEN 'Failed'
WHEN 5 THEN 'Pending'
WHEN 6 THEN 'Ended unexpectedly'
WHEN 7 THEN 'Succeeded'
WHEN 8 THEN 'Stopping'
WHEN 9 THEN 'Completed'
END AS StatusValue
)SV
INNER JOIN catalog.object_versions AS OV ON OV.object_id = O.object_id
INNER JOIN catalog.projects AS P ON P.object_version_lsn = OV.object_version_lsn
INNER JOIN catalog.folders AS F ON F.folder_id = P.folder_id
/*
INNER JOIN
catalog.packages AS PKG
ON PKG.project_id = P.project_id
*/
WHERE O.operation_type = 101 /*deploy project*/
AND P.name = 'SO_66497856' /*project name*/
AND F.name = 'BatchSizeTester'
ORDER BY o.created_time DESC
Perhaps a filter against end_time within the past 10 seconds would be appropriate: if we get a result and the status is Succeeded, we got a deploy; no result means it failed. I presume something similar happens when the GUI runs, and despite all this testing, I'm not interested in firing up a trace to fully round out this answer and see what happens behind the scenes.
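A hedged sketch of that check, collapsed into a simple pass/fail query (same joins and sample names as above; it could be fired via something like sqlcmd -S "TEST03,1111" -d SSISDB -i check_deploy.sql right after the wizard returns, where check_deploy.sql is a made-up file name):

-- Sketch: returns a row only if a deploy of this project finished successfully
-- within the last 10 seconds. Status 7 = Succeeded per the CASE mapping above.
SELECT TOP 1 O.end_time
FROM catalog.operations AS O
    INNER JOIN catalog.object_versions AS OV ON OV.object_id = O.object_id
    INNER JOIN catalog.projects AS P ON P.object_version_lsn = OV.object_version_lsn
    INNER JOIN catalog.folders AS F ON F.folder_id = P.folder_id
WHERE O.operation_type = 101                                   /* deploy project */
  AND O.status = 7                                             /* Succeeded */
  AND O.end_time > DATEADD(SECOND, -10, SYSDATETIMEOFFSET())   /* just finished */
  AND P.name = 'SO_66497856'
  AND F.name = 'BatchSizeTester'
ORDER BY O.created_time DESC;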
If you want to forgo the prebuilt tool, another option would be to use the ManagedObjectModel/PowerShell approach to deploy, as you can get information back from there. The other deployment option is the T-SQL commands. The second link in my documentation section outlines what that would look like.
Paltry documentation I could find
I could find no documentation on the command-line switches for ISDeploymentWizard.exe.
Deploy an SSIS project from the command prompt with ISDeploymentWizard.exe
Deploy Integration Services (SSIS) Projects and Packages
From @arielma's deleted answer, they found a more succinct answer saying "not possible".

Issue with Set-CMTaskSequenceDeployment

It seems that the New-CMTaskSequenceDeployment / Set-CMTaskSequenceDeployment cmdlet option -DeploymentOption does not work as expected.
I'm trying to automate a task sequence deployment via PowerShell. I use the New-CMTaskSequenceDeployment cmdlet to create the deployment. The content of the TS should be downloaded before the start of the TS.
This works OK, but -DeploymentOption DownloadAllContentLocallyBeforeStartingTaskSequence seems not to have any effect: when I check the deployment after the script has run, the option "pre-download content for this task sequence" isn't checked.
Same issue when I try Set-CMTaskSequenceDeployment.
Any hint from the community as to what I'm doing wrong?
...
# Create deployment for all waves now
foreach ($StrCollectionName in $ArrCollectionName)
{
$SchedulePhase2 = New-CMSchedule -Nonrecurring -Start $DateScheduleStartPhase2
Try {
$Deployment = New-CMTaskSequenceDeployment -CollectionName $StrCollectionName -TaskSequencePackageId $StrTaskSequenceID -DeployPurpose Required -AvailableDateTime $DateAvailablePhase1 -DeploymentOption DownloadAllContentLocallyBeforeStartingTaskSequence -SoftwareInstallation $False -SystemRestart $False -Schedule $SchedulePhase2 -RerunBehavior RerunIfFailedPreviousAttempt -AllowUsersRunIndependently $True -SendWakeUpPacket $True
Write-Host "Success - Deployment $Deployment created!"
}
Catch {
Write-Host "Error - Exception caught in creating deployment : $error[0]"
Exit
}
}
...
It looks like, unfortunately (and unexpectedly), the pre-download behavior is different for package/program deployments and task sequence deployments.
In the case of a package/program deployment, the content download will start after the start time if the deployment has a mandatory time defined.
A TS deployment behaves differently: it will start downloading when the mandatory time (schedule) has been reached; the start time is ignored.
This difference is independent of how the deployment was created (console or PowerShell cmdlet), so it is not an issue with the cmdlet.
First of all, you can check the picture below to make sure not to confuse these two options.
Difference between Preload content checkbox and Download all content locally before starting TS
Once that's done, here is my proposition:
Just by clicking in the console, retrieve the properties of your TS deployment before and after you tick the checkbox. You will see that one property changes: AdvertFlags.
PS MUZ:\> (Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags
[Convert]::ToString((Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags,2)
Output :
34275328
10000010110000000000000000
From there, you can read the MS doc: https://learn.microsoft.com/en-us/configmgr/develop/reference/core/servers/configure/sms_advertisement-server-wmi-class
From this, I learned that I need to set the 12th bit, like this:
$advertflag = Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5
$advertflag.AdvertFlags = $advertflag.AdvertFlags -bor "0x00001000"
$advertflag.put()
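To double-check that the flag actually stuck, you could re-read the deployment afterwards; a small sketch reusing the same deployment ID as above:

# Sketch: verify that the 0x00001000 bit is now set on the deployment.
$flags = (Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags
if ($flags -band 0x00001000) {
    Write-Host "Pre-download bit is set: $([Convert]::ToString($flags, 2))"
} else {
    Write-Host "Pre-download bit is NOT set"
}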
I hope it will help someone someday :)

Bash commands putting out extra information which results in issues with scripts

Okay, hopefully I can explain this correctly, as I have no idea what's causing this or how to resolve it.
For some reason, bash commands (on a CentOS 6.x server) are displaying more information than "normal", and that causes issues with certain scripts. I have no clue if there is a name for this, but hopefully someone knows a solution.
First example.
Correct / good server:
[root@goodserver ~]# vzctl enter 3567
entered into CT 3567
[root@example /]#
(this is the correct behaviour)
Incorrect / bad server:
[root@badserver /]# vzctl enter 3127
Entering CT
entered into CT 3127
Open /dev/pts/0
[root@example /]#
With the "bad" server it will display more information as usual, like:
Entering CT
Open /dev/pts/0
It's like it's printing extra information about what it's doing.
Of course, the above is purely cosmetic, but with several bash scripts we use, these really are issues.
A part of the script we use runs the following command (there are more, but this is mainly an example of what's wrong):
DOMAIN=`vzctl exec $VEID 'hostname -d'`
The result of the above is written into /etc/named.conf.
On the GOOD server it would be added in the named.conf like this:
zone "example.com" {
type master;
file "example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
The above is correct.
On the BAD server it would be added in the named.conf like this:
zone "Executing command: hostname -d
example.com" {
type master;
file "Executing command: hostname -d
example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
So it adds output from the action it performs, in this example "Executing command: hostname -d".
Here is another example of running the command on a good server and on the bad server.
Bad server:
[root@bad-server /]# DOMAIN=`vzctl exec 3333 'hostname -d'`
[root@bad-server /]# echo $DOMAIN
Executing command: hostname -d example.com
Good server:
[root@good-server ~]# DOMAIN=`vzctl exec 4444 'hostname -d'`
[root@good-server ~]# echo $DOMAIN
example.com
My knowledge is limited, but I have tried several things, checking rsyslog and grub.conf, but nothing seems out of the ordinary.
I have no clue why it's displaying the extra information.
It's probably something simple/stupid, but I have been trying to solve this for hours now and I really have no clue...
So any help is really appreciated.
Added information:
Both servers use: kernel.printk = 7 4 1 7
(I don't know if that's useful)
Well (thanks to Aaron for pointing me in the right direction), I finally found the little culprit that was causing all the issues I experienced with this script (which worked for every other server, so there was no need to change it).
The issues were caused by the VERBOSE level set in vz.conf (located in the /etc/vz/ directory). There is an option in there called "VERBOSE" and in my case it was set to 3.
According to OpenVZ's website it does the following:
Increments logging level up from the default. Can be used multiple times.
Default value is set to the value of VERBOSE parameter in the global
configuration file vz.conf(5), or to 0 if not set by VERBOSE parameter.
After I changed VERBOSE=3 to VERBOSE=0, my script worked fine once again (as it did for every other server). :-)
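For anyone scripting the same change, something along these lines should do it (a sketch only; back the file up first and check the result):

# Sketch: set VERBOSE=0 in OpenVZ's global config (back it up first).
cp /etc/vz/vz.conf /etc/vz/vz.conf.bak
sed -i 's/^VERBOSE=.*/VERBOSE=0/' /etc/vz/vz.conf
grep '^VERBOSE=' /etc/vz/vz.conf    # should now print VERBOSE=0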
So a big shoutout to Aaron for pointing me in the right direction. The answer is easy when you know where to look!
Sorry to say, but I am kind of disappointed by ndim's reaction. This is the second time he was very unhelpful and rude in his response. He clearly didn't read the issue I posted correctly. Oh well.
I would make sure to properly parse the output of the command. In this case, we are only interested in lines of the form
entered into CT 12345
One way of doing this would be to pipe everything through sed and have sed print only the number when the line looks like the above (in a basic regular expression, the grouping parentheses and braces need a backslash in front of them):
whateverthecommand | sed -n 's/^entered into CT \([0-9]\{1,\}\)$/\1/p'
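For the asker's DOMAIN line specifically, another (equally hedged) option is to keep only the last line of output, assuming vzctl always prints its "Executing command:" chatter before the real output:

# Sketch: drop vzctl's verbose lines and keep only the actual command output,
# assuming the hostname is the last line printed.
DOMAIN=$(vzctl exec "$VEID" 'hostname -d' | tail -n 1)
echo "$DOMAIN"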

Random corruption in a file created/updated from a shell script by a single client on an NFS mount

We have a bash script (job wrapper) that writes to a file, launches a job, and then at job completion appends information about the job to the file. The wrapper runs on one of several thousand batch nodes, but the problem has only cropped up with several batch machines (I believe RHEL6) accessing one NFS server, and at least one known instance of a different batch job on a different batch node using a different NFS server. In all cases, only one client host is writing to the files in question. Some jobs take hours to run, others take minutes.
Over the time period in which this has occurred, there seem to be 10-50 affected jobs out of 100,000+.
Here is what I believe to effectively be the (distilled) version of the job wrapper:
#!/bin/bash
## cwd is /nfs/path/to/jobwd
## This file is /nfs/path/to/jobwd/job_wrapper
gotEXIT()
{
## end of script, however gotEXIT is called because we trap EXIT
END="EndTime: `date`\nStatus: Ended”
echo -e "${END}" >> job_info
cat job_info | sendmail jobtracker#example.com
}
trap gotEXIT EXIT
function jobSetVar { echo "job.$1: $2" >> job_info; }
export -f jobSetVar
MSG="${email_metadata}\n${job_metadata}"
echo -e "${MSG}\nStatus: Started" | sendmail jobtracker@example.com
echo -e "${MSG}" > job_info
## At the job's end, the output from the `time` command is the first non-corrupt data in job_info
/usr/bin/time -f "Elapsed: %e\nUser: %U\nSystem: %S" -a -o job_info job_command
## 10-360 minutes later...
RC=$?
echo -e "ExitCode: ${RC}" >> job_info
So I think there are two possibilities:
echo -e "${MSG}" > job_info
This command writes out corrupt data.
/usr/bin/time -f "Elapsed: %e\nUser: %U\nSystem: %S" -a -o job_info job_command
This corrupts the existing data, then appends its own data correctly.
However, some jobs, but not all, call jobSetVar, and that output doesn't end up corrupt.
So, I dug into time.c (from GNU time 1.7) to see when the file is opened. To summarize, time.c is effectively this:
FILE *outfp;
void main (int argc, char** argv) {
const char **command_line;
RESUSE res;
/* internally, getargs opens "job_info", so outfp = fopen ("job_info", "a") */
command_line = getargs (argc, argv);
/* run_command doesn't care about outfp */
run_command (command_line, &res);
/* internally, summarize calls fprintf and putc on outfp FILE pointer */
summarize (outfp, output_format, command_line, &res);
fflush (outfp);
}
So, time has FILE *outfp (the job_info handle) open for the entire duration of the job. It then writes the summary at the end of the job, and doesn't actually appear to close the file (not sure if that is necessary after fflush?). I have no clue whether bash also has the file handle open concurrently as well.
EDIT:
A corrupted file will typically consist of the corrupted part followed by the non-corrupted part. The corrupted section, which occurs before the non-corrupted section, is typically largely a bunch of 0x0000, with maybe some cyclic garbage mixed in. Here's an example hexdump:
40000000 00000000 00000000 00000000
00000000 00000000 C8B450AC 772B0000
01000000 00000000 C8B450AC 772B0000
[ 361 x 0x00]
Then, at the 409th byte, it continues with the non-corrupted section:
Elapsed: 879.07
User: 0.71
System: 31.49
ExitCode: 0
EndTime: Fri Dec 6 15:29:27 PST 2013
Status: Ended
Another file looks like this:
01000000 04000000 805443FC 9D2B0000 E04144FC 9D2B0000 E04144FC 9D2B0000
[96 x 0x00]
[Repeat above 3 times ]
01000000 04000000 805443FC 9D2B0000 E04144FC 9D2B0000 E04144FC 9D2B0000
Followed by the non corrupted section:
Elapsed: 12621.27
User: 12472.32
System: 40.37
ExitCode: 0
EndTime: Thu Nov 14 08:01:14 PST 2013
Status: Ended
There are other files that have much more random corruption sections, but more than a few were cyclical similar to above.
EDIT 2: The first email, sent from the echo -e statement, goes through fine. The last email is never sent, because the email metadata is lost to the corruption. So MSG isn't corrupted at that point. It's assumed that job_info probably isn't corrupt at that point either, but we haven't been able to verify that yet. This is a production system which hasn't had major code modifications, and I have verified through an audit that no jobs have been run concurrently that would touch this file. The problem seems to be somewhat recent (the last 2 months), but it's possible it happened before and slipped through. This error prevents reporting, which means jobs are considered failed, so they are typically resubmitted, but one user in particular has ~9 hour jobs for which this error is especially frustrating. I wish I could come up with more info or a way of reproducing this at will, but I was hoping somebody has maybe seen a similar problem, especially recently. I don't manage the NFS servers, but I'll try to talk to the admins to see what updates the NFS servers (RHEL6, I believe) were running at the time of these issues.
Well, the emails corresponding to the corrupt job_info files should tell you what was in MSG (which will probably be business as usual). You may want to check how NFS is being run: there's a remote possibility that you are running NFS over UDP without checksums. That could explain some corrupt data. I also hear that UDP/TCP checksums are not strong enough and the data can still end up corrupt -- maybe you are hitting such a problem (I have seen corrupt packets slipping through a network stack at least once before, and I'm quite sure some checksumming was going on). Presumably the MSG goes out as a single packet and there might be something about it that makes checksum conflicts with the garbage you see more likely. Of course it could also be an NFS bug (client or server), a server-side filesystem bug, busted piece of RAM... possibilities are almost endless here (although I see how the fact that it's always MSG that gets corrupted makes some of those quite unlikely). The problem might be related to seeking (which happens during the append). You could also have a bug somewhere else in the system, causing multiple clients to open the same job_info file, making it a jumble.
You can also try using a different file for the 'time' output and then merging it into job_info at the end of the script. That may help to isolate the problem further, for example:
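A rough sketch of that change against the wrapper shown above (the filename time_info is made up here):

# Sketch: let `time` write to its own file, then append it to job_info
# at the end, so the two writers never share the job_info handle.
/usr/bin/time -f "Elapsed: %e\nUser: %U\nSystem: %S" -o time_info job_command
RC=$?
cat time_info >> job_info
rm -f time_info
echo -e "ExitCode: ${RC}" >> job_info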
The shell opens the 'job_info' file for writing, outputs MSG, and should then close its file descriptor before launching the main job. The 'time' program opens the same file for append as a stream, and I suspect the seek over NFS is not done correctly, which may cause that garbage. I can't explain why, but normally this should not happen (and is not happening). Such rare occurrences may point to a race condition somewhere; it can be caused by out-of-sequence packet delivery (due to a network latency spike) or retransmits which cause duplicate data, or a bug somewhere. At first look I would suspect a bug, but that bug may be triggered by some network behavior, e.g. an unusually large delay or a spike of packet loss.
File access between different processes is serialized by the kernel, but as an additional safeguard it may be worth adding some artificial delays - sleep timers between outputs, for example.
The network is not transparent, especially a large one. There can be WAN optimization devices, which are known to cause application issues sometimes. CIFS and NFS are good candidates for optimization over a WAN with local caching of filesystem operations. It might be worth asking the network admins about recent changes.
Another thing to try, although it can be difficult due to the rare occurrences, is to capture the interesting NFS sessions via tcpdump or wireshark. In really tough cases we do simultaneous captures on both the client and server side and then compare the protocol logic to prove that the network is or is not working correctly. That's a whole topic in itself; it requires thorough preparation and luck, but it is usually the last resort of desperate troubleshooting :)
It turns out this was actually another issue altogether, apparently to do with an out-of-date page being written to disk.
A bug fix was supplied to the linux-nfs implementation:
http://www.spinics.net/lists/linux-nfs/msg41357.html

Automating scripts, Rscripts

I'm having a lot of trouble automating my .R files, and I'm having trouble understanding the information about it. But here goes:
I'm using Windows 7 and simply want to automatically run an R script every morning at 08:00. The .R file produces its output by itself, so I don't want a separate output file. I've created a .bat file like this:
"C:\R\R-3.0.1\bin\x64\Rscript.exe" "C:\R\R-3.0.1\bin\x64\Scripts\24AR_v1bat.R"
Echo %DATE% %TIME% %ERRORLEVEL% >> C:\R\R-3.0.1\bin\x64\scripts\24AR_v1.txt
When I run this manually, it works perfectly, both with and without the:
--default-packages=list
When I run it through the cmd window, it works perfectly. Yet when I try to run it through the Task Scheduler, it runs but does not work (I either get a 1 or 2 error in my error-message file).
I've looked at R Introduction - Invoking R from the command line, and help(Rscript), but I still can't manage to get it to work.
NEW EDIT: I found that removing the MS SQL call lets my code run from the scheduler. Not sure if I should make a new question or?
EDIT: Adding the R-script
# 24 Hour AR-model, v1 ----------------------------------------------------
#Remove all variables from the workspace
#rm(list=ls())
# Loading Packages
library(forecast)
#Get spot-prices System from 2012-01-01 to today
source("/location/Scripts/SQL_hourlyprices.R")
sys <- data.frame()
sys <- spot
rm(spot)
# Ordering the data, first making a matrix with names: SYS
colnames(sys) <- c("date","hour","day","spot")
hour <-factor(sys[,2])
day <-factor(sys[,3])
dt<-sys[,1]
dt<-as.Date(dt)
x<-sys[,4]
q <-ts(x, frequency=24)
x0<- q[hour==0]
x1<- q[hour==1]
x0 <-ts(x0, frequency=7)
x1 <-ts(x1, frequency=7)
# ARIMA MODELS
y0<-Arima(x0,order=c(2,1,0))
y1<-Arima(x1,order=c(2,1,1))
fr0 <- forecast.Arima(y0,h=1)
fr1 <- forecast.Arima(y1,h=1)
h1<-as.numeric(fr0$mean)
h2<-as.numeric(fr1$mean)
day1 <-Sys.Date()+1
atable<-data.frame()
runtime<-Sys.time()
atable<-cbind(runtime,day1,h1,h2)
options(digits=4)
write.table(atable, file="//location/24ar_v1.csv",
append=TRUE,quote=FALSE, sep=",", row.names=F, col.names=F)
But as I said, I can manually run the code with the batch file and it works perfectly, yet with the scheduler it won't work.
After hours of trying everything, it seems the problem was that I had:
source("/location/Scripts/SQL_hourlyprices.R")
where I simply had a SQL call inside:
sqlQuery(dbdata2, "SELECT CONVERT(char(10), [lokaldatotid],126) AS date,
DATEPART(HOUR,lokaldatotid) as hour,
DATENAME(DW,lokaldatotid) as dag,
pris as spot
FROM [SpotPriser] vp1
WHERE (vp1.boers_id=0)
AND (vp1.omraade_id=0)
AND lokaldatotid >='2012-01-01'
GROUP BY lokaldatotid, pris
ORDER BY lokaldatotid, hour desc") -> spot
When I moved this directly into the script and deleted the source() line, the script ran fine from the scheduler.
I have no idea why...
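For completeness, the top of the inlined script presumably ends up looking something like the sketch below; the RODBC connection (the DSN name here is made up) also has to be created inside the script, since nothing from the sourced file is available any more:

# Sketch of the inlined version; "spotprices_dsn" is a hypothetical DSN name,
# standing in for whatever connection SQL_hourlyprices.R used to set up.
library(RODBC)
library(forecast)

dbdata2 <- odbcConnect("spotprices_dsn")

spot <- sqlQuery(dbdata2, "SELECT CONVERT(char(10), [lokaldatotid],126) AS date,
                                  DATEPART(HOUR,lokaldatotid) as hour,
                                  DATENAME(DW,lokaldatotid) as dag,
                                  pris as spot
                           FROM [SpotPriser] vp1
                           WHERE (vp1.boers_id=0)
                             AND (vp1.omraade_id=0)
                             AND lokaldatotid >='2012-01-01'
                           GROUP BY lokaldatotid, pris
                           ORDER BY lokaldatotid, hour desc")
odbcClose(dbdata2)

sys <- spot   # the rest of the script continues as before
rm(spot)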
