How can I change a channel variable like destination_number at run time (dynamically) in FreeSWITCH?

I am looking for a solution for dynamically changing a channel variable such as destination_number without needing to reloadxml (as that might affect ongoing or incoming calls). So basically, FS has to wait until I provide it with the appropriate destination_number. Until now I have been doing it the XML way: editing the XML files and then running the reloadxml command at the FS prompt. But that is not viable for my requirement.

You can use a Lua script (or a script in any other FreeSWITCH-supported language) for this. Using Lua you can write a custom routing script with very sophisticated logic.
More details:
https://freeswitch.org/confluence/display/FREESWITCH/Lua+API+Reference
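
As an illustration, here is a minimal Lua sketch of that idea. The script name, the in-memory routes table, the extensions, and the "default" context are all made up; in practice the lookup would typically hit a database or an HTTP service. The script would be invoked from the dialplan with <action application="lua" data="route.lua"/>:

-- route.lua: choose the real destination at call time instead of
-- hard-coding it in the dialplan XML.
-- The routes table stands in for a database or HTTP lookup.
local routes = {
  ["1000"] = "2000",
  ["1001"] = "2001",
}

local dialed = session:getVariable("destination_number")
local target = routes[dialed]

if target then
  freeswitch.consoleLog("INFO", "rerouting " .. dialed .. " -> " .. target .. "\n")
  session:setVariable("destination_number", target)
  session:execute("transfer", target .. " XML default")
else
  session:hangup("NO_ROUTE_DESTINATION")
end

Because the routing decision lives in the script rather than in the dialplan XML, changing a destination is only a data change: no reloadxml is required, and calls already in progress are unaffected.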

Related

Passing variables from Pre-Process to Post-Process scripts

In a DreamFactory/Bitnami instance I managed to get an Event's Pre-Process script and Post-Process script running. However, there are variables generated during the Pre-Process event script that need to be passed to the Post-Process script for further processing.
How should I tackle this problem?
I tried to use the Payload within the Request object, but it is not retained between the scripts. Also, after further reading, I understand that Payload is not meant for this purpose.
The two scripts should have no inherent knowledge of each other. Instead of attempting this method, which should not work without jerry-rigging, I believe your best bet would be to POST the data in question from the first script to an endpoint that calls the second script.
Try starting with these two articles:
https://community.dreamfactory.com/t/v8js-custom-script-call-another-v8js-custom-script/3236
https://community.dreamfactory.com/t/calling-another-endpoint-from-a-custom-script/3847

Parameterise Parm File Name in Informatica

I want to know how (or whether) I can parameterize the parm file name in Informatica.
A little bit of background: I am building a standard mapping in Informatica, which business users can call directly after selecting, via a GUI, the standard filters they want to apply in the mapping.
The parm file name will be given by the business user, and all the filters that he/she selected will be in the parm file. The file will be dropped in the parm folder on the Informatica server.
That is the good-case scenario, when only one user is using it at any one point in time.
Also, I want to find out what I should do when multiple users are working in the GUI, generating parm files and invoking the Informatica mapping. How do I get multiple instances of the same mapping running at the same time?
I hope I am making sense here....
Thanks!!!
You can achieve this by using concurrent execution of the workflow. Read about it and understand how you can implement it.
Once you know how to implement it, have a backend script/code called by the GUI assign an instance name to each call made through the GUI. For each instance name, you can have an individual parameter file. (I believe there would be a finite set of combinations of variable values in your case.) You can use the command below to start individual instances (roughly - check pmcmd startworkflow for the exact connection options), either through your GUI or through any other backend code:
pmcmd startworkflow -sv %integration_service% -d %domain_name%
-f %informatica_folder_name% -paramfile %paramfilepathandname%
-rin %instance_name% %workflow_name%
It might sound a bit confusing, but once you understand how concurrent workflows work, you can build on them based on the input above.
It'll only be possible if you call Informatica from an external tool, not from the Client tools. One way is described by #Utsav; the other is to use the Informatica Web Services Hub (WSH) to call a workflow - you can indicate the parameter file you want to be used with the workflow, as well as the desired instance name.
I think this guide to concurrent workflows may be what you are looking for:
https://kb.informatica.com/howto/6/Pages/17/301264.aspx

Does SQL*Loader have any functionality that allows for customizing the log file?

I have been asked to create a system for allowing third party companies to dump data into several of our tables. These third parties provide csv files on a periodic basis, and after doing some research it seemed like Oracle themselves had a standard tool for doing so, "sqlldr". I've since gotten it working to an acceptable degree, and we have a job scheduled to run that script once a day.
But one of the third parties supplies really dirty data, of the sort where I can't expect every row/record to load (it looks like up to about 8% will fail). My boss asked me to forward "all output" from the first few tests to him, and like a moron I also sent the log file.
He has asked that this "report" be modified to include those exceptions that aren't unique constraints along with the line in the input file that caused the exception.
This means that I need data from the log file, but also from the (I believe) reject file, in a single document. Rather than write a convoluted shell script to combine those two, does SQL*Loader itself allow any customization that might achieve the same thing? I've read through the Oracle documentation and haven't found anything that suggests this, but I've learned not to trust it entirely either.
Is this possible? Ideally, the solution would allow me to add values to the reject file that don't exist in the original input file, but I'm also interested in any customization of the log file or reject file.
No.
I was going to stop there, but you can define the name of the log file, which might help with the issue. Most automation with SQL*Loader involves wrapping it in shell scripts; aka "roll your own."
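
As an illustration of the "roll your own" route, below is a rough sketch of such a wrapper, written in Go rather than shell purely for readability here. Every file name is an assumption, as is the idea of filtering out unique-constraint failures by matching ORA-00001 in the log text; check the real log format against your SQL*Loader version.

package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Run the load. sqlldr exits non-zero (typically 2) when some rows
	// were rejected, so a non-nil err here is not necessarily fatal.
	if err := exec.Command("sqlldr", "control=load.ctl", "log=load.log", "bad=load.bad").Run(); err != nil {
		fmt.Fprintf(os.Stderr, "sqlldr finished with: %v\n", err)
	}

	report, err := os.Create("report.txt")
	if err != nil {
		panic(err)
	}
	defer report.Close()

	// Copy the error lines out of the log, skipping unique-constraint
	// violations (ORA-00001), which this report is not supposed to include.
	logFile, err := os.Open("load.log")
	if err != nil {
		panic(err)
	}
	defer logFile.Close()
	scanner := bufio.NewScanner(logFile)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "ORA-") && !strings.Contains(line, "ORA-00001") {
			fmt.Fprintln(report, line)
		}
	}

	// Append the rejected input records themselves from the bad file.
	if bad, err := os.ReadFile("load.bad"); err == nil {
		fmt.Fprintln(report, "--- rejected records ---")
		report.Write(bad)
	}
}

A shell script doing the same grep/cat work would be equally valid; the point is only that the combined report has to be assembled outside SQL*Loader itself.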

Safely execute command (avoid remote execution) with Golang

I have a small app in Go that handles HTTP requests by executing a process and providing it with input that the user supplied in the request's query string. I was wondering what the best way is to filter that input against remote execution. The PHP equivalent, for example, would be something like:
http://php.net/manual/en/function.escapeshellarg.php
Right now the input should be a valid URL if that makes it easier, but ideally a generic filter would be preferred.
Generally, magic functions like that are very hard to get right, and they will often leave your application open to attacks if you rely heavily on them.
I would recommend that you use a smart URL/request scheme to derive the commands you need to run, and that you put some level of interpretation between the user request and your shell execution, so that no parameter given by the user is used directly.
For example, you could accept requests that contain ?verbose=true and translate them to -v on the command line. When dealing with user input, such as strings that need to be given directly to the command being run, you need to do simple escaping with quotes (with a simple check to see whether the input contains quotes) to ensure you don't run into a "Bobby Tables" problem.
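
Here is a concrete sketch of that interpretation layer in Go (the fetch-tool command and its -v flag are placeholders for whatever you actually run). Note that os/exec never invokes a shell, so each argument reaches the child process verbatim and there are no shell metacharacters to escape:

package main

import (
	"net/http"
	"net/url"
	"os/exec"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Validate rather than escape: require an absolute http(s) URL and
	// reject anything else outright.
	raw := r.URL.Query().Get("url")
	u, err := url.Parse(raw)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") || u.Host == "" {
		http.Error(w, "invalid url", http.StatusBadRequest)
		return
	}

	// Translate query flags into fixed switches, so the user never
	// supplies raw option text.
	var args []string
	if r.URL.Query().Get("verbose") == "true" {
		args = append(args, "-v")
	}
	args = append(args, u.String())

	// exec.Command passes each element of args directly to the child
	// process; no shell is involved at any point.
	out, err := exec.Command("fetch-tool", args...).Output()
	if err != nil {
		http.Error(w, "command failed", http.StatusInternalServerError)
		return
	}
	w.Write(out)
}

func main() {
	http.HandleFunc("/run", handler)
	http.ListenAndServe(":8080", nil)
}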
An alternative would be to have your program and the underlying command exchange data through pipes or files, for example, which would reduce the likelihood of leaving command input open as an attack vector.

Is it possible to create an event-driven service in a shell

Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval in bash. Is it possible to listen for keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language or maybe C++.
Perhaps this needs to be written as a shell extension or something. Just a hint would be fine.
Consider the way that the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal handling in C and push that into the stream, then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm, and possibly ttcompat to be pushed using ioctl.
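
To give a feel for that approach, here is a stripped-down C sketch in the spirit of script(1): it allocates a pseudo terminal, runs an interactive shell on the slave side, and logs everything the shell writes. The session.log name and /bin/sh are assumptions, and putting the real terminal into raw mode with tcsetattr (plus most error handling) is omitted for brevity:

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/types.h>

int main(void) {
    /* Grab a pseudo-terminal master and unlock its slave side. */
    int master = posix_openpt(O_RDWR);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }
    char *slavename = ptsname(master);

    pid_t pid = fork();
    if (pid == 0) {                      /* child: become the shell */
        setsid();                        /* new session; the slave becomes the controlling tty */
        int slave = open(slavename, O_RDWR);
        dup2(slave, 0); dup2(slave, 1); dup2(slave, 2);
        execl("/bin/sh", "sh", "-i", (char *)NULL);
        _exit(127);
    }

    /* parent: sit between the user and the shell, logging shell output */
    FILE *log = fopen("session.log", "w");
    if (log == NULL) {
        perror("session.log");
        return 1;
    }
    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(0, &fds);
        FD_SET(master, &fds);
        if (select(master + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(0, &fds)) {         /* user keystrokes -> shell */
            ssize_t n = read(0, buf, sizeof buf);
            if (n <= 0) break;
            write(master, buf, n);
        }
        if (FD_ISSET(master, &fds)) {    /* shell output -> screen + log */
            ssize_t n = read(master, buf, sizeof buf);
            if (n <= 0) break;
            write(1, buf, n);
            fwrite(buf, 1, n, log);
            fflush(log);
        }
    }
    fclose(log);
    return 0;
}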
A simpler, though less efficient, approach is to run script with its output going to a pipe and capture that output. You will probably need script -f to flush the buffer (I think the -f option is only in the GNU version).
