Use BTEQ to schedule a query in Windows

Hi, I'm pretty new to using BTEQ.
I'm looking to schedule a query that runs over a Teradata connection; the query results should go to an Excel or txt file with separators (so it can be formatted using Excel).
I need to do this through Windows, so I guess it should be a *.bat scheduled using the Windows Task Scheduler.
The thing is that I don't have a clue how to open the connection, run the query, and export the result to a *.xls, *.csv or *.txt.
I have already set up the ODBC connections to TD (I use Teradata Administrator and run the query manually every day).
Any ideas?

BTEQ doesn't use ODBC but a CLI (Teradata CLIv2) connection.
You can simply create a script like this:
.logon TDPID/username,password;
.OS if exist bla.txt del bla.txt; -- remove the file if it exists (otherwise BTEQ appends)
.export report file = bla.txt;
SELECT
TRIM(DataBaseName) || ',' ||
TRIM(TableName) || ',' ||
TRIM(Version) || ',' ||
TRIM(TableKind) || ',' ||
TRIM(ParentCount) (TITLE '')
FROM dbc.TablesV
SAMPLE 10
;
.export reset;
.EXIT;
TDPID is the name of your TD system (or an IP address).
You need to manually format the CSV in the SELECT, as shown above, using TRIM and ||, and you have to take care of possible NULLs using COALESCE(TRIM(col), '').
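For example, a NULL-safe version of the first two columns might look like this (just a sketch of the same COALESCE pattern, reusing the dbc.TablesV columns from above):
SELECT
COALESCE(TRIM(DataBaseName), '') || ',' ||
COALESCE(TRIM(TableName), '') (TITLE '')
FROM dbc.TablesV
SAMPLE 10;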
You might also try the ancient DIF format; then there's no need to care about NULLs, etc.:
.export DIF file = bla.dif;
SELECT
DataBaseName
,TableName
,Version
,TableKind
,ParentCount
FROM dbc.TablesV
SAMPLE 10
;
In TD14 there's a table UDF named CSV which takes care of NULLs and quoting strings instead of TRIM/COALESCE/||. The syntax is a bit lengthy, too:
WITH cte
(
DataBaseName
,TableName
,Version
,TableKind
,ParentCount
)
AS
(
SELECT
DataBaseName
,TableName
,Version
,TableKind
,ParentCount
FROM dbc.TablesV
SAMPLE 10
)
SELECT *
FROM TABLE(CSV(NEW VARIANT_TYPE
(
cte.DataBaseName
,cte.TableName
,cte.Version
,cte.TableKind
,cte.ParentCount
), ',', '"')
RETURNS (op VARCHAR(32000) CHARACTER SET UNICODE)) AS t1;
Finally you run BTEQ and redirect the file (you can put this in a BAT file):
BTEQ < myscript.txt
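A minimal sketch of such a BAT file, which the Windows Task Scheduler can then run (the folder and file names here are assumptions, adjust them to your environment):
@echo off
rem run the BTEQ script and capture all output in a log file
cd /d C:\scripts
bteq < myscript.txt > myscript.log 2>&1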
There might be other options, too, e.g. a TPT/FastExport script, or putting the SQL inside an Excel file that automatically runs the query when opened...

Related

Unequal length between strings after writing to a file - using same delimiter (tab)

I have a short procedure in PL/SQL that uses the UTL_FILE package to create and then write to a .txt file.
Tab is saved in its own variable (v_delimiter), declared as varchar2(5), with a value of chr(9).
The header is saved as a string, concatenated with the v_delimiter and then written to a file.
After that, the rest of the data from an explicit cursor is also written to a file, line by line.
In the end, when I open the txt file, there are unequal widths between some of the strings that make up the header. There are also unequal widths between some of the data from the cursor in the final .txt file, and I guess that shouldn't happen, since I am using one and the same delimiter (tab) to create the header and to build each string from the cursor.
I am using the UTL_FILE.put_line_nchar procedure to write a Unicode line to the file.
I tried without declaring the delimiter as a variable, using chr(9) literally when concatenating, and the result is always the same.
I am out of ideas why this is happening in the final txt file.
v_file_handle UTL_FILE.file_type ;
v_output_path VARCHAR2 (100) := '/Path/to/File' ;
v_file_header VARCHAR2 (32767) ;
v_delimiter VARCHAR2 (5) := chr(9) ;
v_file_handle := UTL_FILE.fopen_nchar (v_output_path,'string_1' || TO_CHAR (SYSDATE, 'dd_mm_yyyy') || '.txt','w', 32767); -- opening
v_file_header :='claimFileIdentifier'|| v_delimiter || 'claimFileOpenedDate'
|| v_delimiter|| 'claimStatus'|| v_delimiter|| 'claimStatusDate'|| v_delimiter|| 'incidentDateTime'|| v_delimiter|| 'incidentPlace'|| v_delimiter|| 'calculationType' ... -- header
UTL_FILE.put_line_nchar ( v_file_handle, v_file_header ) ; --writing header to a file
FOR rec IN cursor_candidates --iterating over a cursor
LOOP
UTL_FILE.put_line_nchar (
v_file_handle,
rec.claimFileIdentifier
|| v_delimiter
|| rec.claimFileOpenedDate
|| v_delimiter
|| rec.claimStatus
|| v_delimiter
|| rec.claimStatusDate
|| v_delimiter
|| rec.incidentDateTime
|| v_delimiter
|| rec.incidentPlace
|| v_delimiter ... ) ; -- writing cursor rows to a file
END LOOP ;
UTL_FILE.fclose ( v_file_handle );
[Screenshot: final txt file with unequal widths between certain strings]
That's kind of expected, in my opinion. Values are separated by the TAB character, but that doesn't mean the output will look "nice" when you view it as a text file.
For example, the following values are separated by TABs, but they look ugly:
a b c
Littlefoot Scott Tiger
If you e.g. imported that file into Excel and set TAB as the column separator, every value would be in its own column and the output would look pretty.
If you wanted the text file itself to look nice as well, you'd have to use a different approach, e.g. LPAD numeric values (IDs, salaries, ...), RPAD textual strings (names, addresses, ...), and possibly SUBSTR (to cut long values short).
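A minimal sketch of that fixed-width approach inside the OP's loop (the column widths and the date format are assumptions, and it assumes claimFileOpenedDate is a DATE):
UTL_FILE.put_line_nchar (
   v_file_handle,
   RPAD (rec.claimFileIdentifier, 20)   -- text: pad right to a fixed 20 chars
   || RPAD (rec.claimStatus, 15)        -- text: pad right to a fixed 15 chars
   || LPAD (TO_CHAR (rec.claimFileOpenedDate, 'dd.mm.yyyy'), 12)); -- date: pad left to 12 chars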
This relates to the functionality of the client application that you use to open the output tab-separated values (TSV) file.
If I have the data:
longtitle longtitle2 longtitle3
a b c
1234567890 123456789010234545 1234567788
and I open it in a basic text editor (such as Notepad), then the columns are unequal across the rows.
However, if I display the same data in an editor that supports TSV files (such as Notepad++), then it displays the same output with equal widths for the columns across the rows.
Importing the data into a spreadsheet application (such as Excel or OpenOffice), the TSV is separated into cells and, again, the information is split neatly into columns.
Do not change how you are outputting the file; instead, find a better way of viewing it, with an application that supports formatting tab-separated values.

How do I use double pipes || with heredoc?

I am trying to accomplish a fairly simple goal: react to the possible error of a previous command with a second command. The wrench in the spokes is the need to use heredoc syntax in the first command.
This (trivialized) example would produce the result I want to catch:
psql -c "select * from table_that_doesnt_exist" || echo "error"
except the SQL I need to execute consists of multiple commands, and in my circumstances I must do this with heredocs:
psql << SQL
select * from good_table;
select * from table_that_doesnt_exist
SQL
and when trying to successfully read the stderr from this type of configuration (I've tried a million ways), I cannot seem to figure it out. These kinds of methods do not work:
( psql << SQL
select * from good_table;
select * from table_that_doesnt_exist
SQL
) || echo "error"
or
psql << SQL || echo "error"
select * from good_table;
select * from table_that_doesnt_exist
SQL
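For what it's worth, the || can legally follow the heredoc redirection on the first line, as in the last attempt above; the usual catch (an assumption about the cause, not stated in the question) is that psql exits with status 0 even when individual statements fail, unless ON_ERROR_STOP is set:
psql -v ON_ERROR_STOP=1 << SQL || echo "error"
select * from good_table;
select * from table_that_doesnt_exist;
SQL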

Check and delete hidden characters in file

I had a file that I was processing, and it was constantly giving me errors. After some checking I realized that it contained some hidden special characters.
I have just manually found the hidden characters and made a simple replace like
REPLACE( String,'', '')
How could I prevent this from happening in the future?
I have tried to make a table that stores these hidden ASCII characters, which are in the range 125-255, but the database does not store them accordingly.
For example, chr(168) is not the same as ASCII 168.
select chr('168'),
convert(chr('168'),
'US7ASCII',
'WE8ISO8859P1')
from dual;
What else can I try?
Easy to write, but not the fastest to execute - using regexp_replace:
update table_name
set column_name =
regexp_replace(column_name, '[' || chr(125) || '-' || chr(255) || ']', '')
;
More efficient, but a pain in the neck to write and maintain (you can write some code to generate the code, though, to save some typing; see the sketch below):
...
set column_name =
translate(column_name, '~' || chr(125) || chr(126) || ..... || chr(255), '~')
;
Yes, with translate() you will have to spell them all out; there is no "range" concept... {:-(
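A sketch of the "code to generate the code" idea (same placeholder table/column names as above): build the chr(125)..chr(255) list once in PL/SQL and pass it to translate():
DECLARE
   v_from VARCHAR2(200) := '~';  -- the '~' is kept; everything appended after it gets removed
BEGIN
   FOR i IN 125 .. 255 LOOP
      v_from := v_from || CHR(i);
   END LOOP;
   UPDATE table_name
      SET column_name = TRANSLATE(column_name, v_from, '~');
END;
/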
That's to change the data already in the tables. It would be even better to fix the data as it comes in, if you can, using a similar transformation.

Output key, value pairs with sqlplus

I am using sqlplus to output the contents of a table:
select * from my_table;
The output is similar to this:
ID
-------------
FIELD1
----------------
FIELD2
----------------
someid
field 1 content
field 2 content
I tried many combinations of the page formatting options. I can format the output as a table by setting head, pages, termout, echo, feed and linesize.
But I would like to output the values in a (key, value) fashion, like this:
ID = someid
FIELD1 = field 1 content
FIELD2 = field 2 content
The delimiter and formatting are not important, as long as I have one column per line.
Is it possible using only sqlplus? I would like to avoid scripting this. On this particular machine (an appliance I can't install anything on) I only have bash, perl and Python 2.4.
I don't know of any SQL*Plus settings that will get you this format, but you can do it by brute force. Something like this should work:
SELECT
'ID = ' || id || CHR(10) ||
'FIELD1 = ' || field1 || CHR(10) ||
'FIELD2 = ' || field2
FROM my_table;
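Presumably you would also turn off headings and feedback so only the concatenated lines appear in the output (standard SQL*Plus settings, not part of the original answer):
set heading off
set pagesize 0
set feedback off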
Addendum: the OP asked if it's possible to make CHR(10) the COLSEP value in SQL*Plus. I initially thought "no way", but stumbled across this StackOverflow answer on how to set the COLSEP to a tab, and modified it to use CHR(10). I don't quite understand how/why it works, but it does:
col NEWLINE# new_value NEWLINE NOPRINT
select chr(10) NEWLINE# from dual;
set colsep "&NEWLINE"
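For context (my reading of it, not from the original answer): COL ... NEW_VALUE copies each fetched value of the NEWLINE# column into the substitution variable NEWLINE, so the SELECT from dual loads a literal linefeed into &NEWLINE, which SET COLSEP then adopts. After that, a plain query prints each column value on its own line:
select * from my_table;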

ORACLE - Exporting Procedures / Packages to a file

I would like to programmatically export my procedures/functions and packages into individual files (as a backup), using Oracle 9.2.
The closest solution I found was using DBMS_METADATA.GET_DDL, but how do I output the CLOB to a text file without losing any parts (due to length or indentation)?
Or maybe you have other solutions for backing up packages or other functions individually (only the ones I want, not all of them)?
Thanks
Trying to get CLOBs (and LONGs) out of command-line utilities like SQL*Plus always seems to give me formatting/truncation problems. My solution was to write a simple utility in a loosely typed language (Perl) that uses DBMS_METADATA to bring the CLOB back as a string.
Snippet:
...
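# note (my addition, not in the original snippet): DBI truncates LONG/CLOB
# fetches unless $dbhRef->{LongReadLen} has been set large enough beforehand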
$sthRef = $dbhRef->prepare("select dbms_metadata.get_ddl(?,?) from dual");
$sthRef->execute('PACKAGE', $thisName);
while (($thisDDL) = $sthRef->fetchrow()) {
print $thisDDL;
}
$sthRef->finish;
...
If you want to get the DDL, there really is no way except DBMS_METADATA, as you already said.
Usually this kind of backup is done with exp (or expdp), although that doesn't create a SQL file like you would get with most other DBMS systems.
SET pages 0
spool proclist.sql
SELECT
CASE line
WHEN 1 THEN
'CREATE OR REPLACE ' || TYPE || ' ' || NAME || CHR(10) || text
ELSE
text
END
FROM user_source
WHERE TYPE IN ( 'PROCEDURE','FUNCTION')
ORDER BY name, line;
spool OFF
exit
Thanks go to RAS and guest for their answers.
I needed to get the code for some procedures only, so I tried the code above, only to find that it truncates the code after the procedure name in the first line and replaces it with three dots '...',
so I changed the code to the following:
SELECT CASE line
WHEN 1 THEN 'CREATE OR REPLACE ' -- || TYPE || ' ' || NAME || --CHR(10) || ' ('
|| text
ELSE
text
END
FROM user_source
WHERE TYPE IN ( 'PROCEDURE') and name like 'SomeThing%'
ORDER BY name, line;
and this page,
export procedures & triggers,
has some very useful code:
connect fred/flintstone;
spool procedures_punch.lst
select
dbms_metadata.GET_DDL('PROCEDURE',u.object_name)
from
user_objects u
where
object_type = 'PROCEDURE';
spool off;
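If the spooled DDL comes out truncated, the usual remedy is to raise SQL*Plus's LONG fetch limits before spooling (standard settings; the sizes here are just assumptions):
set long 100000
set longchunksize 100000
set pagesize 0
set linesize 1000
set trimspool on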
A final way to do it is by using the Toad Schema Browser: select all the needed procedures, right-click, and choose Export from the menu.
