Safely execute command (avoid remote execution) with Golang - shell

I have a small app in Go that handles HTTP requests by executing a process and providing it with some input from the query string that the user supplied with the request. I was wondering what the best way is to filter that input so it cannot be abused for remote command execution. The PHP equivalent, for example, would be something like:
http://php.net/manual/en/function.escapeshellarg.php
Right now the input should be a valid URL if that makes it easier, but ideally a generic filter would be preferred.

Generally, magic functions like that are very hard to get right, and they will often leave your application open to attack if you rely on them heavily.
I would recommend that you use a smart URL/request scheme to derive the commands you need to run, and put some level of interpretation between the user request and your shell execution so that no parameter given by the user is used directly.
For example, you could accept requests that contain ?verbose=true and translate them to -v on the command line. When dealing with user input such as strings that must be passed directly to the command being run, you need to do simple escaping with quotes (plus a simple check that the input does not itself contain quotes) to ensure you don't run into a "Bobby Tables" problem.
An alternative would be to have your program and the underlying command exchange data through pipes or files, which would reduce the likelihood of leaving the command input as an open attack vector.
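A minimal sketch of that mapping approach in Go (the binary path /usr/local/bin/mytool, the url and verbose query parameters, and the flag names are all made up for illustration): known query parameters are translated into fixed flags, the user-supplied value is validated, and everything is handed to exec.Command as discrete arguments, so no shell is involved and shell metacharacters are never interpreted.

package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os/exec"
)

func handler(w http.ResponseWriter, r *http.Request) {
	var args []string

	// Translate ?verbose=true into -v instead of passing it through.
	if r.URL.Query().Get("verbose") == "true" {
		args = append(args, "-v")
	}

	// Validate the user-supplied value before using it at all.
	target := r.URL.Query().Get("url")
	if _, err := url.ParseRequestURI(target); err != nil {
		http.Error(w, "invalid url", http.StatusBadRequest)
		return
	}

	// "--" ends option parsing in many tools, so a value starting with "-"
	// cannot be mistaken for a flag (assumes the tool supports this).
	args = append(args, "--", target)

	// exec.Command runs the binary directly, without /bin/sh in between.
	out, err := exec.Command("/usr/local/bin/mytool", args...).Output() // hypothetical binary
	if err != nil {
		http.Error(w, "command failed", http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "%s", out)
}

func main() {
	http.HandleFunc("/run", handler)
	http.ListenAndServe(":8080", nil)
}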

Related

Passing variables from Pre-Process to Post-Process scripts

In a DreamFactory/Bitnami instance I managed to get an Event's Pre-Process script and Post-Process script running. However, there are variables that are generated during the Pre-Process event script that need to be passed to the Post-Process script for further processing.
How should I tackle this problem?
I tried to use the Payload within the Request object, but it is not retained between the scripts. Also, after further reading, I understand that Payload is not intended for this purpose.
The two scripts should have no inherent knowledge of each other. Instead of attempting this method, which should not work without jerry-rigging, I believe your best bet would be to POST the data in question from the first script to an endpoint that calls the second script.
Try starting with these two articles:
https://community.dreamfactory.com/t/v8js-custom-script-call-another-v8js-custom-script/3236
https://community.dreamfactory.com/t/calling-another-endpoint-from-a-custom-script/3847

Parameterise Parm File Name in Informatica

I want to know how (or whether) I can parameterize the parm file name in Informatica.
A little bit of background: I am building a standard map in Informatica, which business users can call directly after selecting the standard filters they want to apply to the map using a GUI.
The parm file name will be given by the business user, and all the filters that he/she selected will be in the parm file. The file will be dropped in the parm folder on the Informatica server.
This is the good case scenario, when only one user is using it at a given point in time.
I also want to find out what I should do when multiple users are working in the GUI, generating parm files and invoking the Informatica map. How do I get multiple instances of the same map running at the same time?
I hope I am making sense here....
Thanks!!!
You can achieve this by using concurrent execution of the workflow. Read about it and understand how you can implement it.
Once you know how to implement it, use a backend script/program called by the GUI to assign an instance name to each call made through the GUI. For each instance name, you can have an individual parameter file. (I believe there would be a finite set of combinations of variable values in your case.) You can use the command below to call individual instances, either through your GUI or from any other backend code:
pmcmd startworkflow -f %informatica_folder_name%
-paramfile %paramfilepathandname% -rin %instance_name% %workflow_name%
It might sound a bit confusing, but once you understand how concurrent workflows work, you can build on it based on the above input.
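As a rough sketch of that backend idea (this is not code from the answer; the pmcmd flags follow the usual startworkflow syntax as I recall it, and every service name, path, and credential below is a placeholder - verify against pmcmd help startworkflow in your environment), a small Go program could write one parameter file per GUI request, derive a unique instance name, and start a concurrent run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// startRun writes a per-request parameter file and starts one concurrent
// instance of the workflow. All names and paths are placeholders.
func startRun(user string, filters map[string]string) error {
	instance := fmt.Sprintf("%s_%d", user, time.Now().Unix()) // unique instance name per call
	paramFile := fmt.Sprintf("/infa/parm/%s.parm", instance)

	// Build the parameter file from the filters chosen in the GUI.
	content := "[Global]\n"
	for k, v := range filters {
		content += fmt.Sprintf("$$%s=%s\n", k, v)
	}
	if err := os.WriteFile(paramFile, []byte(content), 0644); err != nil {
		return err
	}

	// -rin gives each concurrent run its own instance name.
	cmd := exec.Command("pmcmd", "startworkflow",
		"-sv", "INT_SVC", "-d", "Domain_Dev", // integration service / domain (placeholders)
		"-u", "infauser", "-p", "secret", // credentials (placeholders)
		"-f", "STANDARD_FOLDER",
		"-paramfile", paramFile,
		"-rin", instance,
		"wf_standard_map")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	if err := startRun("alice", map[string]string{"REGION": "EMEA"}); err != nil {
		fmt.Fprintln(os.Stderr, "workflow start failed:", err)
	}
}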
It will only be possible if you call Informatica from an external tool, not from the client tools. One way is described by @Utsav; the other is to use the Informatica Web Services Hub (WSH) to call a workflow - you can indicate the parameter file you want to be used with the workflow, as well as the desired instance name.
I think this guide to concurrent workflows may be what you are looking for:
https://kb.informatica.com/howto/6/Pages/17/301264.aspx

redis: EVAL and the TIME

I like the Lua scripting for Redis, but I have a big problem with TIME.
I store events in a sorted set.
The score is the time, so that in my application I can view all events in a given time window.
redis.call('zadd', myEventsSet, TIME, EventID);
OK, but this is not working - I cannot access TIME (the server time).
Is there any way to get the time from the server without passing it as an argument to my Lua script? Or is passing the time as an argument the best way to do it?
This is explicitly forbidden (as far as I remember). The reasoning behind this is that your Lua functions must be deterministic and depend only on their arguments. What if this Lua call gets replicated to a slave with a different system time?
Edit (by Linus G Thiel): This is correct. From the Redis EVAL docs:
Scripts as pure functions
A very important part of scripting is writing scripts that are pure functions. Scripts executed in a Redis instance are replicated on slaves by sending the script -- not the resulting commands.
[...]
In order to enforce this behavior in scripts Redis does the following:
Lua does not export commands to access the system time or other external state.
Redis will block the script with an error if a script calls a Redis command able to alter the data set after a Redis random command like RANDOMKEY, SRANDMEMBER, TIME. This means that if a script is read-only and does not modify the data set it is free to call those commands. Note that a random command does not necessarily mean a command that uses random numbers: any non-deterministic command is considered a random command (the best example in this regard is the TIME command).
There is a wealth of information on why this is, how to deal with this in different scenarios, and what Lua libraries are available to scripts. I recommend you read the whole documentation!
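So the practical answer to the second question is yes: have the client pass the current time in as an argument. A minimal sketch in Go, assuming the go-redis client (github.com/redis/go-redis/v9); the key and member names are made up:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// The script receives the score through ARGV, so it stays deterministic
// and replicates safely.
const addEvent = `return redis.call('zadd', KEYS[1], ARGV[1], ARGV[2])`

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	now := time.Now().Unix() // client-side time becomes the sorted-set score
	res, err := rdb.Eval(ctx, addEvent, []string{"myEventsSet"}, now, "event:42").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(res)
}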

How do I write to the console regardless of whether or not the user uses `|` or `>` operators on the shell?

I wrote a program in Ruby and have been writing all data to the console with puts.
If I run my.rb from the console, I can redirect the output both with > (to a file) and with | (to another program).
How should I change stdout in order for data to be written to the Windows console?
The whole idea of those symbols is to allow users of your app to redirect the output to their desired place. This is useful for logging, filtering, and any number of other applications where you want the output of one program to become the input of another program or file. It facilitates a form of inter-process communication that often isn't possible otherwise.
Essentially, unless you can more clearly define the reason you want to do this (a specific case where it is useful and desirable), you should not try to do it, nor am I sure it's even possible, because those symbols operate at the shell level. I don't think there is anything within the scope of Ruby that will have any effect whatsoever on where the output goes. The shell (after the output has already left your Ruby program) is capturing it and redirecting it. By that point, that data is already out of the control of your Ruby app.
If you are trying to differentiate from real "output" and error messages that the user should see, you can instead send output to the standard error output with something like:
$stderr << 'oh noes!'
The standard error output is redirected independently from the standard output.

2-way communication with background process (I/O)

I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard out/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it small input, and other times I send in a large text file with a series of inputs, one per line. This program takes a lot of resources and also has a long startup time, so it would be best to have only one instance of it running at a time. I could keep the program's prompt open and supply the input that way, or I could send the process the input followed by an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow...). Furthermore, the output of this program is sometimes cryptic and it would be helpful to filter the output in some way (e.g. simplify it, apply ANSI colors, etc.).
This all makes me want to put some 2-way I/O filter (or is that a "pipe"? or a "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it. I would also like to filter the output to be more user-friendly while doing this. I have been looking all over for ideas and I am stumped at how to accomplish this in some simple, shell-accessible manner.
Some things I have tried were redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, leaving me unable to continue the communication. I think this was because the prompt was waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin accomplishing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to do it, do you think there is a way to also allow attaching to or detaching from the prompt on request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect input and output:
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
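If you end up writing the "wrapper" you describe instead of (or on top of) the raw fifos, here is a rough sketch of that idea in Go - a different route from the named-pipe answer above, with the program path and filter rule as placeholders: the child is started once, its stdin stays open, and every line it prints is filtered before being shown.

package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical slow-to-start interactive program.
	cmd := exec.Command("/usr/local/bin/mathprog")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		panic(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Filter the child's output before printing it (drop noise, add color, ...).
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "DEBUG") {
				continue // example filter rule
			}
			fmt.Println(">>", line)
		}
	}()

	// Forward our own stdin to the child, line by line; this is where a
	// large input file arrives when the wrapper is run as: wrapper < myfile
	in := bufio.NewScanner(os.Stdin)
	for in.Scan() {
		fmt.Fprintln(stdin, in.Text())
	}
	stdin.Close() // EOF tells the program to leave its prompt
	cmd.Wait()
}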
