I'm a newbie on FS,
and I need to generate a call from outside of FS.
I'll use ESL eventually, but for the moment, for testing, it's enough to do it from the CLI so I can start to understand. Here is an example:
I'd like to call a PSTN number (e.g. 0112345678) and bridge it with one of the extensions of my PBX (e.g. 101@mydomain.com). My caller ID is 01156789012, so I prepared this string:
originate {origination_caller_id_number=0112345678}sofia/default/101@mydomain.com 0112345678 xml default myname 01156789012
I tested it, and even with some changes nothing happens. Where am I wrong?
Thanks for your help
EC
I want to know how to (or whether I can) parameterize the parm file name in Informatica.
A little bit of background: I am building a standard map in Informatica, which business users can call directly after selecting, in a GUI, the standard filters they want to apply in the map.
The parm file name will be given by the business user, and all the filters that he/she selected will be in the parm file. The file will be dropped in the parm folder on the Informatica server.
This is the good-case scenario, when only one user is using it at a given point in time.
I also want to find out what I should do when multiple users are working in the GUI, generating parm files and invoking the Informatica map. How do I get multiple instances of the same map running at the same time?
I hope I am making sense here....
Thanks!!!
You can achieve this by using concurrent execution of the workflow. Read about it and understand how you can implement it.
Once you know how to implement it, use a backend script/code invoked by the GUI to assign an instance name to each call made through the GUI. For each instance name, you can have an individual parameter file. (I believe there would be a finite set of combinations of variable values in your case.) You can use the command below to call individual instances, either through your GUI or from any other backend code:
pmcmd startworkflow -f %informatica_folder_name%
-paramfile %paramfilepathandname% -rin %instance_name% %workflow_name%
It might sound a bit confusing, but once you understand how concurrent workflows work, you can build on it based on the above input.
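For illustration only, here is a rough Python sketch of what such a backend call could look like. The folder, workflow, and file names are placeholders, and the pmcmd connection options (-sv/-d/-u/-p) are omitted, so it is not a drop-in implementation:

import subprocess
import uuid

def start_workflow_instance(folder, workflow, param_file):
    # Give every GUI-triggered run its own instance name so the
    # concurrent runs of the same workflow do not collide.
    instance_name = "run_" + uuid.uuid4().hex[:8]
    subprocess.run(
        ["pmcmd", "startworkflow",
         "-f", folder,
         "-paramfile", param_file,
         "-rin", instance_name,
         workflow],
        check=True)
    return instance_name

# One call per user-generated parm file, e.g.:
# start_workflow_instance("MY_FOLDER", "wf_standard_map", "/infa/parm/user1.parm")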
It'll only be possible if you call Informatica from an external tool, not from the client tools. One way is described by @Utsav; the other is to use the Informatica WSH (Web Services Hub) to call a workflow - you can indicate the parameter file you want to be used with the workflow, as well as the desired instance name.
I think this guide to concurrent workflows may be what you are looking for:
https://kb.informatica.com/howto/6/Pages/17/301264.aspx
I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length header to do the parsing.
Other solutions use IO#read_nonblock. I googled it, but was quite confused by the documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior shows up when there is no data available to read at call time, but the stream is not yet at EOF:
read_nonblock() raises an exception that is a kind of IO::WaitReadable
a normal read(length) waits until length bytes have been read (or EOF is reached)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock uses the read(2) system call after O_NONBLOCK has been set for the underlying file descriptor.
how they play together with the Kernel#select method?
There's also IO.select. We can use it in this case to wait for input data to become available, so that a subsequent read_nonblock() won't raise an error. This is especially useful when there are multiple input streams and it is not known which stream data will arrive on next, and therefore which stream read() would have to be called on.
With a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately. This means that you can continue executing your program while the operating system asynchronously writes the data to the file. Then, when you want to write again, you use select to see whether the file is ready to accept the next write.
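To make the select-then-read pattern concrete, here is a small sketch. It is written in Python rather than Ruby, but it exercises the same mechanics described above (O_NONBLOCK on the descriptor, select(2) to wait for readability, then a read that cannot block), which is what IO.select plus read_nonblock give you in Ruby:

import select
import socket

def read_when_ready(sock, maxlen=4096):
    # sets O_NONBLOCK on the underlying file descriptor
    sock.setblocking(False)
    # select blocks until at least one of the watched streams is
    # readable, so the following recv() will not hang the thread
    readable, _, _ = select.select([sock], [], [])
    if sock in readable:
        return sock.recv(maxlen)   # returns b"" at EOF
    return b""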
What is the procedure for defining new external collectors in Bosun using scollector?
Can we write Python or shell scripts to collect data?
The documentation around this is not quite up to date. You can do it as described in http://godoc.org/bosun.org/cmd/scollector#hdr-External_Collectors, but we also support JSON output, which is better.
Either way, you write something and put it in the external collectors directory, followed by a frequency directory, and then an executable script or binary. Something like:
<external_collectors_dir>/<freq_sec>/foo.sh.
If the frequency directory is 0, then the script is expected to be continuously running, and you put a sleep inside the code (this is my preferred method for external collectors). The script outputs the telnet format, or the undocumented JSON format, to stdout. Scollector picks it up and queues that information for sending.
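As a rough illustration (the metric name, tag, and interval below are made up, and it assumes a Linux host), a continuously running Python collector dropped under the 0 frequency directory could look like this:

#!/usr/bin/env python
import sys
import time

while True:
    # read the 1-minute load average as an example data point
    with open("/proc/loadavg") as f:
        load1 = f.read().split()[0]
    now = int(time.time())
    # telnet/OpenTSDB text format: metric timestamp value tag=value ...
    print("os.load.1min %d %s host=myhost" % (now, load1))
    sys.stdout.flush()   # don't let the line sit in the stdio buffer
    time.sleep(15)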
I created an issue to get this documented not long ago https://github.com/bosun-monitor/bosun/issues/1225. Until one of us gets around to that, here is the PR that added JSON https://github.com/bosun-monitor/bosun/commit/fced1642fd260bf6afa8cba169d84c60f2e23e92
Adding to what Kyle said, you can take a look at some existing external collectors to see what they output. Here is one written in Java that one of our colleagues wrote to monitor JVM stuff. It uses the text format, which is simply:
metricname timestamp value tag1=foo tag2=bar
If you want to use the JSON format, here is an example from one of our collectors:
{"metric":"exceptional.exceptions.count","timestamp":1438788720,"value":0,"tags":{"application":"AdServer","machine":"ny-web03","source":"NY_Status"}}
And you can also send metadata:
{"Metric":"exceptional.exceptions.count","Name":"rate","Value":"counter"}
{"Metric":"exceptional.exceptions.count","Name":"unit","Value":"errors"}
{"Metric":"exceptional.exceptions.count","Name":"desc","Value":"The number of exceptions thrown per second by applications and machines. Data is queried from multiple sources. See status instances for details on exceptions."}`
Or send error messages to stderr:
2015/08/05 15:32:00 lookup OR-SQL03: no such host
I was wondering whether it is possible to look inside the input stream when the "chrome://" protocol is used in Firefox. Let me be more clear. Let's take the following call sequence, for example:
1. nsXULDocument.cpp has a nsXULDocument::ResumeWalk() method.
2. It calls LoadScript() [around line 3004].
3. LoadScript() calls NS_NewStreamLoader [nsXULDocument.cpp, line 3440].
4. NS_NewStreamLoader calls NS_NewChannel() [nsNetUtil.h, line 593].
5. NS_NewChannel() then calls ioservice->NewChannelFromURI() [nsNetUtil.h, line 226].
6. NewChannelFromURI() calls NewChannelFromURIWithProxyFlags() [nsIOService.cpp, line 596].
7. NewChannelFromURIWithProxyFlags() calls handler->newChannel(), which is resolved at runtime to become nsChromeProtocolHandler->newChannel() [nsChromeProtocolHandler.cpp, line 182].
8. This in turn calls ioServ->NewChannelFromURI() [nsChromeProtocolHandler.cpp, line 196].
9. Step 6 is repeated.
10. Step 7 is repeated; however, at different times it can load different handlers based on the protocol (chrome, jar, file, etc.).
My intention in describing the above call sequence was to set up the context for my problem. I want to know when the "chrome://" protocol is used, and when it is used, I want to process the input stream. For example, if Firefox is loading a script like "chrome://package/content/script.js", I want to intercept the file when it is accessed from the disk. After intercepting the file, I might change its contents, or dump the file's contents in a folder of my choice.
So, whenever Firefox reads a file (using a method like fread(), probably - I would like to know that as well), I would like to determine whether the read request came from the chrome protocol or not, and at that moment make some changes to the file based on my needs. Any help regarding this?
For those who've stumbled here curious about the 'chrome' protocol, here are some references that may be handy:
- Chrome Protocol, part of SPDY
- Let's make the web faster project
On Win32, using the Windows API, is there any way to know which COM ports (from COM0 upwards) actually exist as devices?
At the moment I am just attempting to open them all (0 to 9), but I can't tell the difference between a failure because a port doesn't exist and a failure because a port simply isn't available for use because someone else is using it. Both situations seem to return the same last error, so I was wondering if I could list all the COM ports available on the system.
I believe you can call QueryDosDevice() and pass null for the first parameter and then parse the results.
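Roughly, that approach looks like the sketch below (shown with Python's ctypes just for brevity; the helper name is made up, and it wraps the same QueryDosDeviceW call and NUL-separated result buffer you would parse in C):

import ctypes

def existing_com_ports():
    # Passing NULL as the device name makes QueryDosDeviceW return every
    # MS-DOS device name, packed into the buffer and separated by NULs.
    size = 1 << 16
    buf = ctypes.create_unicode_buffer(size)
    n = ctypes.windll.kernel32.QueryDosDeviceW(None, buf, size)
    if n == 0:
        raise ctypes.WinError()   # e.g. buffer too small
    names = buf[:n].split("\x00")
    # keep only names like COM1, COM37, ...
    return sorted(name for name in names
                  if name.startswith("COM") and name[3:].isdigit())

# print(existing_com_ports())   # e.g. ['COM1', 'COM3']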
Search google for "enumerate com ports". This is an example link.
The name is unfortunate, but "SetupAPI" is the relevant part of the Windows API. Call SetupDiGetClassDevs once for the device interface class GUID_DEVINTERFACE_COMPORT, then call SetupDiEnumDeviceInfo repeatedly, starting at index 0, until GetLastError() == ERROR_NO_MORE_ITEMS.