DataGrip: script to export multiple queries to CSV needed

I would like to export 5 queries from 5 different databases to 5 local CSV files, ideally by starting only one script that does all of this.
In the old days, in MS Access, I used multiple "DoCmd.TransferSpreadsheet" calls for this in one VB script. Is there something similar in DataGrip?
use db1;
exportToCsv1(select * from...);
use db2;
exportToCsv2(select * from...);
...

In DataGrip, to export a query result to CSV, use Execute to File from the context menu.
The alternative is to run the query and export the result grid itself.

Related

Informatica PC restart workflow with different sql query

I am using Informatica PowerCenter.
I have a workflow which contains a SQL query,
something like "select t1, t2, t3 from table where t1 between date '2020-01-01' and date '2020-01-31'".
I need to download all data between 2020 and 2022, but I can't put that whole range in the query because I will get an ABORT SESSION from Teradata.
I want to write something which will restart the workflow with different dates automatically:
on the first run take 01.2020, on the second run 02.2020, on the third run 03.2020, and so on.
How can I solve this problem?
This is a somewhat long solution and can be achieved in two ways. Using only a shell script will give you the most flexibility.
First of all, parameterize your mapping with two mapping parameters. Use them in the SQL like below.
select t1, t2, t3 from table where t1 between date '$$START_DT' and date '$$END_DT'
The idea is to change them on each run.
Using only a shell script - This is flexible because you can handle as many runs as you want with this method. You need to call the shell script from a Command task.
Create a master file with entries like this:
2020-01-01,2020-01-31
2020-02-01,2020-02-29
2020-03-01,2020-03-31
Create three Informatica parameter files using the above entries. The first file (file1) should look like this:
[folder.workflow.session_name]
$$START_DT=2020-01-01
$$END_DT=2020-01-31
Use the file (file1) in a pmcmd call to kick off the Informatica workflow. Please add -wait so pmcmd waits for the run to complete.
Loop through the above steps until all entries of the master file have been processed; a rough sketch of such a driver script is shown below.
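For illustration only, the loop could look like the following sketch. The Integration Service, domain, repository credentials, folder, workflow and session names are placeholders (assumptions) you must replace with your own, and the master file is assumed to be master.txt with the format shown above.

# Rough sketch of the shell-only driver loop -- all names below are placeholders.
while IFS=',' read -r START_DT END_DT; do
  # Build the parameter file for this run
  {
    echo "[folder.workflow.session_name]"
    echo "\$\$START_DT=$START_DT"
    echo "\$\$END_DT=$END_DT"
  } > run_params.txt

  # Kick off the workflow and wait for it to finish before starting the next month
  pmcmd startworkflow -sv int_service -d domain -u repo_user -p repo_pwd \
        -f folder -paramfile run_params.txt -wait workflow
done < master.txt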
Using an Informatica-only method - This method is not as flexible as the one above and is applicable only to your question.
Create a shell script that creates the three parameter files from the master file above.
Create three sessions or three worklets which use the above three parameter files. You need to be careful to use the correct parameter file for the correct session.
You can attach those sessions/worklets one after another or run them in parallel.

Writing autosys job information to Oracle DB

Here's my situation: we have no access to the autosys server other than using the autorep command. We need to keep detailed statistics on each of our jobs. I have written some Oracle database tables that will store start/end times, exit codes, JIL, etc.
What I need to know is what is the easiest way to output the data we require (which is all available in the autosys tables that we do not have access to) to an Oracle database.
Here are the technical details of our system:
autosys version - I cannot figure out how to get this information
Oracle version - 11g
We have two separate environments - one for UAT/QA/IT and several PROD servers
Do something like this:
Create a table with the columns you want to capture. Add a key column which should be auto-generated. The JIL column should be able to handle large values. Also add a column for SYSDATE.
Create a shell script. Inside it, do as follows:
Run "autorep -j -l0" to get all the jobs you want and put them in a file. The -l0 is to ignore duplicate jobs; if a box contains a job, then without -l0 you would get the job twice.
Create a loop and read all the job names one by one.
In the loop, set variables for job name/start time/end time/status (all of which you can get from autorep -j). Then use a variable to hold the JIL, obtained from autorep -q -j.
Append all these variable values to a flat file.
End the loop. After exiting the loop you will end up with a file containing all the job details.
Then use SQL*Loader to load the data into your Oracle table. You can hardcode a control file and use it for every run; only the content of the data file will change from run to run. A rough sketch of the whole script is shown below.
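For illustration, a sketch of that script follows. The job name pattern, the awk parsing of the autorep report layout, the credentials and the control/data file names are all assumptions you would adapt to your environment.

# Rough sketch -- job pattern, report parsing, credentials and file names are placeholders.
autorep -j "MYAPP%" -l0 | awk 'NR>3 && NF {print $1}' > job_list.txt   # job names only, skip report header

> job_stats.dat   # start with an empty data file
while read -r JOB; do
  REPORT=$(autorep -j "$JOB" -l0 | awk 'NR>3 && NF')
  START=$(echo "$REPORT" | awk '{print $2" "$3}')    # last start date + time
  END=$(echo "$REPORT" | awk '{print $4" "$5}')      # last end date + time
  STATUS=$(echo "$REPORT" | awk '{print $6}')
  JIL=$(autorep -q -j "$JOB" | tr '\n' ' ')          # flatten the JIL to one line
  echo "$JOB|$START|$END|$STATUS|$JIL" >> job_stats.dat
done < job_list.txt

# Load the flat file into the Oracle table with SQL*Loader and a fixed control file
sqlldr userid=scott/tiger@ORCL control=job_stats.ctl data=job_stats.dat log=job_stats.log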
Let me know if any part is not clear.

MongoDB huge bulk insert performance

I'm inserting a lot of data, e.g. 1 million documents. How should I insert them? After some small tests I get different timings when inserting the data in batches of 500 versus 1000 documents (bulk). In my use case 500 is faster. Which batch size should I use? Any suggestions?
For batch inserts like the one you are talking about, it would be better to use the appropriately named mongoimport command line tool.
The mongoimport tool provides a route to import content from a JSON, CSV, or TSV export created by mongoexport, or potentially, another third-party export tool...
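A minimal invocation might look like the sketch below; the file name docs.json and the database/collection names mydb and mycollection are assumptions.

# Minimal sketch -- file, database and collection names are placeholders.
# Assumes docs.json contains one JSON document per line (newline-delimited JSON).
mongoimport --db mydb --collection mycollection --file docs.json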

How to pump data to txt file using Oracle datapump?

I hope you can help.
I need to export a huge table (900 fields, 1,000,000 rows) into a TXT ANSI file.
UTL_FILE takes a lot of time; it is not suitable for this task.
I'm trying to use Oracle Data Pump, but I can't get a TXT file with ANSI characters in it (only 2TTЁ©QRўҐEJЉ•).
Can anybody advise me?
Thank you in advance.
Oracle Data Pump can only export in its proprietary binary format.
If you want to export data to text you have only a few options:
A PL/SQL or Java (stored) procedure which writes a file using UTL_FILE or the Java equivalent API.
A program running outside the database that writes to a file. Use whichever language you're comfortable with.
Pro*C might be a good choice as it is apparently much faster than the UTL_FILE approach, see http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:459020243348
Use a special SQL script and run it in SQL*Plus using spooling. This is the "SQL Unloader" approach, see http://www.orafaq.com/wiki/SQL*Loader_FAQ#Is_there_a_SQL.2AUnloader_to_download_data_to_a_flat_file.3F
Googling "SQL Unloader" comes up with a few ready-made solutions that you might be able to use directly or modify for your needs.

Writing to oracle logfile from unix shell script?

I have an Oracle concurrent program which calls a UNIX shell script, which in turn runs a SQL*Loader program. This is used for loading flat files from a legacy system into Oracle base tables.
My question here is:
how do I capture my custom messages, validation error messages, etc. in the Oracle log file of the concurrent program?
Any help in this regard is much appreciated.
It looks like you are trying to launch SQL*Loader from Oracle Apps. The simplest way would be to use the SQL*Loader type of executable; this way you will get the output and log files right in the concurrent requests window.
If you want to write to the log file and the output file from a UNIX script, you can find their names in the FND_CONCURRENT_REQUESTS table (columns LOGFILE_NAME and OUTFILE_NAME). You should have the REQUEST_ID passed as a parameter to your script.
These files should be in $XX_TOP/log and should be called l{REQUEST_ID}.req and o{REQUEST_ID}.out (Apps 11.5.10).
How is your concurrent process defined? If it's using the "Host" execution method then the output should go into the concurrent log file. If it's being executed from a stored procedure, I'm not sure where it goes.
Have your script use SQL*Plus to sign into Oracle and insert/update the information you need; a rough sketch combining these suggestions is shown below.
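The sketch below assumes a Host-type concurrent program where the request id arrives in the standard fourth positional parameter; that position, the apps credentials and the control/data/log file names are all assumptions to adapt to your own setup.

# Rough sketch for a Host-type concurrent program -- parameter position, credentials
# and file names are placeholders.
REQUEST_ID=$4                       # request id as passed by the concurrent manager (assumed position)

# For a Host program, anything written to stdout ends up in the concurrent request log
echo "Starting SQL*Loader run for request $REQUEST_ID"
sqlldr userid=apps/apps_pwd control=legacy_load.ctl data=legacy.dat log=legacy_load.log
RC=$?
echo "SQL*Loader finished with exit code $RC -- see legacy_load.log for rejected rows"

# Alternatively, look up the concurrent request log file and append to it directly
LOGFILE=$(sqlplus -s apps/apps_pwd <<EOF
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
SELECT logfile_name FROM fnd_concurrent_requests WHERE request_id = $REQUEST_ID;
EXIT
EOF
)
echo "Custom validation messages go here" >> "$LOGFILE"
exit $RC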

Resources