I've got a series of LINQPad queries (C# programs) that I use to change values in a database. Currently, I need to open them all up one at a time and run them manually.
I need to be able to just run the script, and have it execute all of the queries.
I have a folder structure something like this:
A
    query1
    query2
B
    query3
    query4
C
    query5
    query6
What kind of script would I use to run this (bash, PowerShell, something inherent in LINQPad)?
There is another approach:
string htmlResult = Util.Run("test.linq", QueryResultFormat.Html).AsString();
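Building on that, here is a minimal, untested sketch that loops over every .linq file under a root folder and runs each one from inside a single LINQPad "controller" query; the root path is a placeholder for your own A/B/C structure:

// Hypothetical controller query: run every .linq file found under rootFolder.
var rootFolder = @"C:\Queries";   // placeholder - point this at the folder containing A, B, C
foreach (var file in System.IO.Directory.GetFiles(rootFolder, "*.linq", System.IO.SearchOption.AllDirectories))
{
    // Util.Run executes the other query; Html matches the format used above.
    string htmlResult = Util.Run(file, QueryResultFormat.Html).AsString();
    Console.WriteLine($"Ran {file} ({htmlResult.Length} chars of output)");
}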
It seems LINQPad has built-in scripting support via lprun on the command line. I was having trouble with this because with LINQPad 6 you need to use lprun6 instead.
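For example, a rough PowerShell sketch (assuming lprun6 is on your PATH; C:\Queries is a placeholder for the folder containing A, B and C) that runs every query in the tree:

# Run every .linq file under the folder tree with lprun6 (LINQPad 6);
# use lprun instead if you are on LINQPad 5.
Get-ChildItem -Path 'C:\Queries' -Filter *.linq -Recurse |
    ForEach-Object { lprun6 $_.FullName }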
I'm just stepping into the world of PASE and Qshell on the IBM i, and am still learning the "shell way" of solving problems.
I'm trying to write a shell script that will delete all logical files in a given library that are associated with a given physical file. The OS is IBM i 7.2. In QSYS-land, I'd probably DSPDBR on the physical file to an outfile, then read through the outfile and delete each dependent file. How would you do this in PASE or Qshell? I had a couple ideas, but they all seem overly-complicated, and the more I learn about shell scripting, the more shortcuts I'm finding.
My first idea was to basically replicate the above process, doing something like this and then somehow using the output of the SELECT:
system "DSPDBR FILE(MYLIB/MYFILE) OUTPUT(*OUTFILE) OUTFILE(QTEMP/DSPDBR)"
db2 "select WHREFI from QTEMP/DSPDBR where WHRELI = 'MYLIB'"
(I see now that QTEMP doesn't really work as a temporary library, but maybe there's a way around this.)
My second idea was to pipe the output of the DSPDBR command into something like awk to pick out the logical file names, redirect that output to a stream file (or a shell variable?), and from there somehow use the list to delete the logical files.
Is there a more straightforward approach? It seems like whatever the answer is, it will be a pattern that is often repeated when writing shell scripts to interact with QSYS commands and objects.
First, you might review Running SQL queries from PASE instead of QSH. The db2 utility is part of Qshell and not PASE. Unfortunately, the processing behind it is based in ILE, so it's not directly usable within the PASE (AIX run-time) environment. The linked question provides a method of bridging between the two.
However, directly in Qshell, you can experiment with something like this:
db2 "SELECT substr(VIEW_NAME,1,18), substr(OBJECT_NAME,1,10),
substr(OBJECT_SCHEMA,1,10), substr(VIEW_SCHEMA,1,10),
substr(TABLE_NAME,1,18)
FROM qsys2.sysviewdep
WHERE OBJECT_SCHEMA = '<yourSchemaName>'"
The SUBSTR() functions might or might not be useful. It depends on your name lengths and whether you want them limited or not. The output can be redirected to an outfile or perhaps piped into sed or another utility for manipulation.
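To turn that into the delete step the question asks about, here is a rough, untested Qshell sketch; the library and file names are placeholders, and the sed filtering of db2's decorated output (heading line, record-count trailer) may need adjusting for your release and naming:

# List the logical files dependent on MYLIB/MYFILE, then delete each one.
db2 "SELECT VIEW_NAME FROM qsys2.sysviewdep
     WHERE OBJECT_NAME = 'MYFILE' AND OBJECT_SCHEMA = 'MYLIB'" |
  sed -e '1d' -e '/^-/d' -e '/RECORD(S) SELECTED/d' -e '/^ *$/d' |
  while read LF; do
    system "DLTF FILE(MYLIB/$LF)"
  done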
A QTEMP library is scoped to the process that runs the request in the shell, so ensure that all of the requests that depend on the same QTEMP library run in the same process; the system utility runs in a separate process, as does the db2 utility. For example, in the following command-line invocation of Qshell, everything that depends on QTEMP runs in the one process in which the db2 utility runs: the db2 utility executes a script (it could instead run a dynamic compound statement) that was generated by the preceding echo requests within the shell:
qsh cmd('
echo "call qsys2.qcmdexc
(''dspdbr mylib/myfile output(*outfile) outfile(qtemp/dspdbr)'')
" >> mydb2.script
; echo "select WHREFI from QTEMP.DSPDBR where WHRELI =''MYLIB''
" >> mydb2.script
; db2 -f mydb2.script
; rm mydb2.script
')
There is an open-source equivalent of the Qshell db2 command: https://bitbucket.org/litmis/db2util
Is there any way of adding "helper" methods to the mongo shell that load each time you use it?
Basically, whenever you want to query by _id, you have to do something like this:
db.collectionName.findOne({_id: ObjectId('THIS-IS-AN-OBJECTID')})
Whenever I'm going to be running a lot of commands from the shell, I alias the ObjectId function to make it easier to type:
var ob = ObjectId;
db.collectionName.findOne({_id: ob('AN-OBJECTID')})
db.collectionName.findOne({_id: ob('ANOTHER-ONE')})
db.collectionName.findOne({_id: ob('ANOTHER')})
It would be pretty great if there were a way to run a specified piece of JS, or add a chunk of code that runs, each time mongo is started from the shell. I checked out MongoDB's CLI documentation but didn't see anything like that available, so I figured I would ask here.
I know there is a possibility of using this nefariously, so it might be unsupported by the mongo shell by default. In that case, could we create a helper bash script of some sort that launches the shell and then injects keyboard input to create the helper ob function? I'm not sure how this could be tackled personally, but I would love some insight on how to do something like this, either natively or through a helper script of some sort.
If you want code to execute every time you launch the shell, then whatever you place in .mongorc.js will be run on launch:
.mongorc.js File
When starting, mongo checks the user’s HOME directory for a JavaScript file named .mongorc.js. If found, mongo interprets the content of .mongorc.js before displaying the prompt for the first time. If you use the shell to evaluate a JavaScript file or expression, either by using the --eval option on the command line or by specifying a .js file to mongo, mongo will read the .mongorc.js file after the JavaScript has finished processing. You can prevent .mongorc.js from being loaded by using the --norc option.
So simply define your variable association there.
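For instance, a minimal ~/.mongorc.js along these lines (just the alias from the question) makes ob available in every session:

// ~/.mongorc.js - evaluated automatically before the mongo shell prompt appears
var ob = ObjectId;   // shorthand, e.g. db.collectionName.findOne({_id: ob('AN-OBJECTID')})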
You could also supply a file of your choice along with the --shell option to tell the command that you want the shell to stay open once the instructions in that file have completed:
mongo --shell file_with_javascript.js
But as mentioned, the .mongorc.js file would still be called (if present) unless the --norc option was also specified.
Is there a way to automatically run a pig script when invoking pig from command line?
The reason I'm wondering about this is that I have several import and define statements that I use constantly over and over to set everything up. Is it possible to define this collection of statements somewhere so that when I start pig, it will automatically execute those lines? I apologize in advance if this is something trivial that I missed from the documentation.
Yes, you certainly can, from version 0.11 onwards.
You need to use the .pigbootup file.
Here is a nice blog post on setting up the .pigbootup file:
http://hadoopified.wordpress.com/2013/02/06/pig-specify-a-default-script/
If you want to include Pig macros from a file, you can use the IMPORT command.
Take a look at http://pig.apache.org/docs/r0.9.1/cont.html#import-macros for reference.
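Putting the two together, a hedged sketch of what a ~/.pigbootup file might contain; the jar and macro paths below are placeholders, and the piggybank DEFINE is just illustrative:

-- ~/.pigbootup: these statements run automatically when the grunt shell starts
-- (the file location can be overridden with the pig.load.default.statements property).
REGISTER /path/to/my-udfs.jar;                 -- placeholder jar
IMPORT '/path/to/common_macros.pig';           -- placeholder macro file
DEFINE UPPER org.apache.pig.piggybank.evaluation.string.UPPER();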
New to Pig.
I'm loading data into a relation like so:
raw_data = LOAD '$input_path/abc/def.*';
It works great, but if it can't find any files matching def.* the entire script fails.
Is there a way to continue with the rest of the script when there are no matches, and just produce an empty set?
I tried to do:
raw_data = LOAD '$input_path/abc/def.*' ONERROR Ignore();
But that doesn't parse.
You could write a custom load UDF that returns either the file or an empty tuple.
http://wiki.apache.org/pig/UDFManual
No, there is no such feature, at least not one that I've heard of.
Also, I would say that "producing an empty set" amounts to "not running the script at all".
If you don't want to run a Pig script under some circumstances, then I recommend using wrapper shell scripts or Pig embedding:
http://pig.apache.org/docs/r0.11.1/cont.html
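As an illustration of the wrapper-script route, a rough, untested sketch (the HDFS path, parameter name, and script name are placeholders) that only runs the Pig script when the input glob matches something:

#!/bin/sh
INPUT_PATH=/data                                  # placeholder
if hadoop fs -ls "$INPUT_PATH/abc/def.*" > /dev/null 2>&1; then
    # The glob matched at least one file - run the real script.
    pig -param input_path="$INPUT_PATH" myscript.pig
else
    echo "No input matching $INPUT_PATH/abc/def.* - skipping myscript.pig"
fi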
We need to do some queries against a MongoDB database from BASH shell scripts. Using eval and Mongo's printjson() gives me text output, but it needs to be parsed. Using other scripting languages (Python, Ruby, Erlang, etc.) is not an option.
I looked at JSON.sh (a BASH JSON parser library: https://github.com/rcrowley/json.sh) and it appears to be close to a solution, except that it does not recognize the BSON-but-not-JSON data types. Before I try to modify it to recognize BSON data types, is anyone aware of an existing solution?
Thanks.
Update 10/11: Below, Stennie notes that I have received an answer in the MongoDB user group and provides a URL. The answer is very nice and complete, and begins, "MongoDB actually uses what we call Mongo Extended JSON which differs a bit from the vanilla JSON standard..." so I will have to modify the parser. Thanks to all.
Do you perhaps want to use tojson() rather than printjson() and loop through the result of tojson() to parse the fields?
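For example, a rough sketch (the database, collection, and query are placeholders) that emits one compact JSON document per line, which is easier to feed line by line into a parser such as JSON.sh:

# --quiet suppresses the shell banner; tojson(doc, "", true) prints single-line JSON.
mongo --quiet mydb --eval '
  db.mycollection.find({status: "active"}).forEach(function (doc) {
      print(tojson(doc, "", true));
  });
'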