Is there an equivalent of MySQL's EXTRACT function in Apache Derby? - derby

Is there an equivalent of MySQL's EXTRACT function in Apache Derby? Or does Derby not support it?

Derby does not have an exact equivalent of EXTRACT.
It has the individual functions:
SECOND
MINUTE
HOUR
DAY
MONTH
YEAR
But you can always write your own functions:
CREATE FUNCTION TO_DEGREES(RADIANS DOUBLE) RETURNS DOUBLE
PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
EXTERNAL NAME 'java.lang.Math.toDegrees'
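For an EXTRACT-style function specifically, a minimal sketch could look like the following; the class and method names (example.DateUtil.extractField) and the function name EXTRACT_FIELD are hypothetical, and the static method is registered in Derby the same way as TO_DEGREES above:

// Hypothetical helper class for a Derby user-defined function.
// Compile it and put it on Derby's classpath before registering the function.
package example;

import java.sql.Timestamp;
import java.util.Calendar;

public class DateUtil {
    // Returns the requested field (YEAR, MONTH, DAY, HOUR, MINUTE, SECOND) of a timestamp.
    public static int extractField(Timestamp ts, String field) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(ts);
        switch (field.toUpperCase()) {
            case "YEAR":   return cal.get(Calendar.YEAR);
            case "MONTH":  return cal.get(Calendar.MONTH) + 1; // Calendar months are 0-based
            case "DAY":    return cal.get(Calendar.DAY_OF_MONTH);
            case "HOUR":   return cal.get(Calendar.HOUR_OF_DAY);
            case "MINUTE": return cal.get(Calendar.MINUTE);
            case "SECOND": return cal.get(Calendar.SECOND);
            default: throw new IllegalArgumentException("Unknown field: " + field);
        }
    }
}

/* Registered in Derby the same way as TO_DEGREES, then usable in queries:

   CREATE FUNCTION EXTRACT_FIELD(TS TIMESTAMP, FIELD VARCHAR(10)) RETURNS INTEGER
   PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
   EXTERNAL NAME 'example.DateUtil.extractField'

   SELECT EXTRACT_FIELD(order_ts, 'MONTH') FROM orders
*/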
See also:
Overview: https://wiki.apache.org/db-derby/DerbySQLroutines
DayOfWeekExample: https://mail-archives.apache.org/mod_mbox/db-derby-user/200510.mbox/%3C43418926.6010407#sun.com%3E

Related

JDBC encoding in Python: comma-delimited Pandas column names

I attempted to read in data over a JDBC connection from Spark using JayDeBeApi, and my pandas.read_sql result contains columns with comma-delimited names:
e.g. (A,p,p,l,e,s)....(P,e,a,r,s)
Df = pd.read_sql(query, jdbc_conn)
I realize this is an encoding problem, but the JDBC API doesn't have encoding or option methods to set the encoding like pyodbc does. Is there a way to pass an encoding argument to the URL or the API?
Thanks for your help.
I had the same issue when connecting to an Oracle database with JayDeBeApi. It was because the path to the JDK was not set properly. Now that it is fixed, the parsing issue is gone.

Parameter from popup in PowerCenter?

I'm just getting started with PowerCenter, and I wonder whether there is a transformation where I can enter a parameter through some command or popup?
For example, the client wants to enter the date for some reports.
Thanks
There is no transformation in PowerCenter that can do that, because PowerCenter is mostly used as a batch processing tool.
However, if you need to change some parameters before each execution, you can do so using a parameter file.
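For example, a minimal parameter file might look like the lines below; the folder, workflow and session names and the $$ReportDate mapping parameter are hypothetical and just illustrate the format:
[MyFolder.WF:wf_daily_report.ST:s_m_load_report]
$$ReportDate=2015-06-17
The file is then referenced in the workflow or session properties, so the value can be changed before each run without modifying the mapping.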
A different approach: you can simply have a source file where you put the date, then read this source file to get the desired date and use it downstream.

Download files from S3 into Hive based on last modified?

I would like to download a set of files whose last-modified date falls within a certain time period, say 2015-5-6 to 2015-6-17. The contents of these files will be put directly into a Hive table for further processing.
I know that this is possible, but only for a single file or for an entire bucket. I would like to download all files in a bucket whose last-modified date falls within a time range.
How can multiple files be downloaded into a Hive table based on the above requirement?
Did you try with this:
CREATE EXTERNAL TABLE myTable (key STRING, value INT) LOCATION
's3n://mys3bucket/myDir/*'; or
's3n://mys3bucket/myDir/filename*'; (if the file names start with something common)
This is possible using the AWS SDK for Java, where a custom UDF or UDTF could be written to list the keys and return their last-modified dates using:
S3ObjectSummary.getLastModified();
More info: AWS Java SDK Docs - S3ObjectSummary
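A rough sketch of the listing-and-filtering part with the AWS SDK for Java (v1) follows; the bucket name, prefix and date range are placeholders, and downloading the matching keys (or copying them to a location a Hive external table points at) still has to be handled separately:

// Sketch: list the keys in a bucket/prefix and keep those whose
// last-modified date falls inside a given range (AWS SDK for Java v1).
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class S3ModifiedFilter {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        Date from = fmt.parse("2015-05-06");   // start of the range
        Date to   = fmt.parse("2015-06-17");   // end of the range

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        ListObjectsV2Request req = new ListObjectsV2Request()
                .withBucketName("mys3bucket")   // placeholder bucket
                .withPrefix("myDir/");          // placeholder prefix

        List<String> matchingKeys = new ArrayList<>();
        ListObjectsV2Result result;
        do {
            result = s3.listObjectsV2(req);
            for (S3ObjectSummary summary : result.getObjectSummaries()) {
                Date lastModified = summary.getLastModified();
                if (!lastModified.before(from) && !lastModified.after(to)) {
                    matchingKeys.add(summary.getKey());
                }
            }
            req.setContinuationToken(result.getNextContinuationToken());
        } while (result.isTruncated());

        // matchingKeys can now be downloaded (e.g. with TransferManager)
        // or staged somewhere for a Hive external table to pick up.
        matchingKeys.forEach(System.out::println);
    }
}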

Running SQLLDR in DataStage

I was wondering, for folks familiar with DataStage, whether Oracle SQLLDR can be used in DataStage. I have some sets of control files that I would like to incorporate into DataStage. A step-by-step way of accomplishing this would be greatly appreciated. Thanks
My guess is that you can run it with an external stage in DataStage.
You simply put the SQLLDR command in the external stage and it will be executed.
Try it and tell me what happens.
We can use Oracle SQL*Loader in DataStage.
If you check the Oracle docs, there are two types of loading under SQL*Loader:
1) Direct Path Load - less validation on the database side
2) Conventional Path Load
Direct Path Load does less validation than Conventional Path Load.
In the SQL*Loader process we have to specify options such as:
Direct or not
Parallel or not
Constraint and index options
Control, discard and log files
In DataStage, we have the Oracle Enterprise and Oracle Connector stages.
Oracle Enterprise -
This stage has a load option for loading data in fast mode, and we can set the load OPTIONS for Oracle through an environment variable, for example:
OPTIONS(DIRECT=FALSE,PARALLEL=TRUE)
Oracle Connector -
This stage has a bulk load option, and the other SQL*Loader-related properties are available on the properties tab.
For example, the control and discard file values are all set by DataStage, but you can set these and other properties manually.
As you know, SQLLDR basically loads data from files into the database. DataStage lets you use any input data file: read it with, for example, a Sequential File stage, give it the format and the schema of the table, and it builds an in-memory template table. Then you can use a database connector such as ODBC or DB2 to load the data into your table, simple as that.
NOTE: if the table does not already exist at the backend, set the stage to create it on the first execution, then switch it to append or truncate.
Steps:
Read the data from the file (Sequential File stage).
Load it using the Oracle Connector. (You can use bulk load, which uses the direct load method of SQL*Loader; the data file and control file settings can be configured manually.)
Bulk load operation: the connector receives records from the input link and passes them to the Oracle database, which formats them into blocks and appends the blocks to the target table, as opposed to storing them in the free space available in existing blocks.
You can refer to the IBM documentation for more details.
Remember, there may be some restrictions on loading when it comes to handling rejects, triggers or constraints when you use bulk load. It all depends on your requirements.

How to pump data to a txt file using Oracle Data Pump?

All hope is on you.
I need to export a huge table (900 columns, 1,000,000 rows) into an ANSI txt file.
UTL_FILE takes a lot of time; it is not suitable for this task.
I'm trying to use Oracle Data Pump, but I can't get a txt file with ANSI characters in it (only 2TTЁ©QRўҐEJЉ•).
Can anybody advise me?
Thank you in advance.
Oracle Data Pump can only export in its proprietary binary format.
If you want to export data to text you have only a few options:
A PL/SQL or Java (stored) procedure that writes a file using UTL_FILE or the equivalent Java API.
A program running outside the database that writes to a file; use whichever language you're comfortable with (see the sketch below).
Pro*C might be a good choice as it is apparently much faster than the UTL_FILE approach, see http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:459020243348
A special SQL script run in SQL*Plus using spooling. This is the "SQL Unloader" approach, see http://www.orafaq.com/wiki/SQL*Loader_FAQ#Is_there_a_SQL.2AUnloader_to_download_data_to_a_flat_file.3F
Googling "SQL Unloader" turns up a few ready-made solutions that you might be able to use directly or modify for your needs.
