Insert XML file into XMLType table using JDBC

I have created an XMLType table in Oracle. I am trying to insert an XML file into the table using JDBC. It is throwing -
ORA-00932: inconsistent datatypes: expected - got BINARY
The code is -
OraclePreparedStatement statement = (OraclePreparedStatement) getConnection().prepareStatement
("insert into person values(?)");
FileInputStream fileinp = new FileInputStream(file);
statement.setBinaryStream(1, fileinp, fileLength);
statement.executeUpdate();

You're trying to insert binary data directly into your XMLType column, and there is no implicit casting for that. Assuming your file is actually text you can treat it as a CLOB rather than a BLOB:
OraclePreparedStatement statement =
    (OraclePreparedStatement) getConnection().prepareStatement(
        "insert into person values(xmltype(?))");
FileInputStream fileinp = new FileInputStream(file);
InputStreamReader filerdr = new InputStreamReader(fileinp);
statement.setCharacterStream(1, filerdr, fileLength);
statement.executeUpdate();
Note that the statement is now using xmltype(?) (although it works without that as there is implicit casting from CLOB, but I think it's better to be explicit anyway); and I'm using an InputStreamReader to pass the text in. You can, and probably should, use a buffered reader:
FileInputStream fileinp = new FileInputStream(file);
InputStreamReader filerdr = new InputStreamReader(fileinp);
BufferedReader filebuf = new BufferedReader(filerdr);
statement.setCharacterStream(1, filebuf, fileLength);
statement.executeUpdate();
Tested with an XMLType column in a normal table, and with an XMLType table.
Passing a text file with setBinaryStream() confuses XMLType; with the same valid file, it fails with:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00210: expected '<' instead of '3'
There's probably a way around that, but I'm assuming your file is just text and won't be a problem as a CLOB.

As I see in the official documentation here, you can insert an XMLType in Java in one of two ways:
By CLOB or string binding
By setObject() or setOPAQUE() on an OraclePreparedStatement
I would read the file into a string and then use it in the insert query.
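For example, a minimal sketch of that approach (it assumes the file is UTF-8, small enough to read fully into memory, that person has a single XMLType column, and Java 7+):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.sql.PreparedStatement;

// Read the whole XML document into a String, then bind it as character data;
// xmltype(?) converts it on the database side.
String xml = new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
try (PreparedStatement ps = getConnection().prepareStatement(
        "insert into person values(xmltype(?))")) {
    ps.setString(1, xml);
    ps.executeUpdate();
}

For very large documents setString() may hit the bind size limit, in which case the character-stream approach shown in the other answer is the safer choice.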

Can I control how Oracle maps the integer types in ADO.NET?

I've got a legacy database that was created with the database type INTEGER for many (1,000+) Oracle columns. A database with the same structure exists for MS SQL. As I was told, the original definition was created using a tool that generated the scripts from a logical model into the specific ones for MS SQL and Oracle.
Using C++ and MFC the columns were mapped nicely to the integer type for both DBMSs.
I am porting this application to .NET and C#. The same C# codebase is used to access both MS SQL and Oracle. We use the same DataSets and logic and we need the same types (int32 for both).
The ODP.NET driver from Oracle maps them to Decimal. This is logical as Oracle created the integer columns as NUMBER(37) automatically. The columns in MS SQL map to int32.
Can I somehow control how to map the types in the ODP.NET driver? I would like to say something like "map NUMBER(37) to int32". The columns will never hold values bigger than the limits of an int32. We know this because it is being used in the MS SQL version.
Alternatively, can I modify all columns from NUMBER(37) to NUMBER(8) or SIMPLE_INTEGER so that they map to the right type for us? Many of these columns are used as primary keys (think autoincrement).
Regarding type mapping, I hope this is what you need:
http://docs.oracle.com/cd/E51173_01/win.122/e17732/entityDataTypeMapping.htm#ODPNT8300
Regarding the type change: if the table is empty, you may use the following script (just replace [YOUR_TABLE_NAME] with the table name in upper case):
DECLARE
    v_table_name CONSTANT VARCHAR2(30) := '[YOUR_TABLE_NAME]';
BEGIN
    FOR col IN (SELECT * FROM user_tab_columns
                WHERE table_name = v_table_name
                  AND data_type = 'NUMBER'
                  AND data_precision = 37)
    LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE '||v_table_name||' MODIFY '||col.column_name||' NUMBER(8)';
    END LOOP;
END;
/
If some of these columns are not empty, then you can't decrease their precision in place.
If you don't have too much data, you may move it to a temporary table:
create table temp_table as select * from [YOUR_TABLE_NAME];
then truncate the original table:
truncate table [YOUR_TABLE_NAME];
then run the script above, and then move the data back:
insert /*+ append */ into [YOUR_TABLE_NAME] select * from temp_table;
commit;
If the amount of data is substantial, it is better to move it only once. In that case it is faster to create a new table with the correct datatypes and all indexes, constraints and so on, then move the data, then rename both tables so the new table ends up with the original name.
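A rough sketch of that rebuild-and-rename approach (table and column names are placeholders):

-- Build the replacement table with the narrower datatype, copying the data once
CREATE TABLE your_table_new AS
    SELECT CAST(id AS NUMBER(8)) AS id, other_col
    FROM your_table;
-- Recreate indexes, constraints and grants on your_table_new here, then swap the names
ALTER TABLE your_table RENAME TO your_table_old;
ALTER TABLE your_table_new RENAME TO your_table;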
Unfortunately the mapping of numeric types between .NET and Oracle is hardcoded in the OracleDataReader class.
In general I prefer to set up appropriate data types in the database, so if possible I would change the column datatypes, because they better represent the actual values and their constraints.
Another option is to wrap the tables in views that cast the columns to NUMBER(8), but this will negatively impact execution plans because it prohibits index lookups.
Then you also have some application implementation options:
Implement your own data reader or a subset of the ADO.NET classes (inheriting from DbProviderFactory, DbConnection, DbCommand, DbDataReader, etc. and wrapping the Oracle classes), depending on how complex your implementation is. Oracle.DataAccess, Devart and all providers do exactly the same, because it gives total control over everything, including any magic with the data types. If the datatype conversion is the only thing you want to achieve, most of the implementation would be just calling the wrapped class methods/properties.
If you have access to the OracleDataReader after the command is executed and before you start reading from it, you can do a simple hack and set the resulting numeric type using reflection (the following implementation is just a simplified demonstration).
However, this will not work with ExecuteScalar, as this method never exposes the underlying data reader.
using System;
using System.Collections;
using System.Reflection;
using Oracle.DataAccess.Client;

var connection = new OracleConnection("DATA SOURCE=HQ_PDB_TCP;PASSWORD=oracle;USER ID=HUSQVIK");
connection.Open();
var command = connection.CreateCommand();
command.CommandText = "SELECT 1 FROM DUAL";
var reader = command.ExecuteDatabaseReader();
reader.Read();
Console.WriteLine(reader[0].GetType().FullName);
Console.WriteLine(reader.GetFieldType(0).FullName);

public static class DataReaderExtensions
{
    private static readonly FieldInfo NumericAccessorField = typeof(OracleDataReader).GetField("m_dotNetNumericAccessor", BindingFlags.NonPublic | BindingFlags.Instance);
    private static readonly object Int32DotNetNumericAccessor = Enum.Parse(typeof(OracleDataReader).Assembly.GetType("Oracle.DataAccess.Client.DotNetNumericAccessor"), "GetInt32");
    private static readonly FieldInfo MetadataField = typeof(OracleDataReader).GetField("m_metaData", BindingFlags.NonPublic | BindingFlags.Instance);
    private static readonly FieldInfo FieldTypesField = typeof(OracleDataReader).Assembly.GetType("Oracle.DataAccess.Client.MetaData").GetField("m_fieldTypes", BindingFlags.NonPublic | BindingFlags.Instance);

    public static OracleDataReader ExecuteDatabaseReader(this OracleCommand command)
    {
        var reader = command.ExecuteReader();

        // Switch the numeric accessor of column 0 so its value is materialized as Int32 instead of Decimal.
        var columnNumericAccessors = (IList)NumericAccessorField.GetValue(reader);
        columnNumericAccessors[0] = Int32DotNetNumericAccessor;

        // Also patch the reported field type so GetFieldType(0) returns Int32.
        var metadata = MetadataField.GetValue(reader);
        var fieldTypes = (Type[])FieldTypesField.GetValue(metadata);
        fieldTypes[0] = typeof(Int32);

        return reader;
    }
}
I implemented an extension method for command execution that returns the reader, where I can set up the desired column numeric types. Without setting the numeric accessor (it's just the internal enum Oracle.DataAccess.Client.DotNetNumericAccessor) you will get System.Decimal; with the accessor set you get Int32. Using this you can get Int16, Int32, Int64, Float or Double.
The columnNumericAccessors index is a column index, and it is applied only to numeric types; if the column is DATE or VARCHAR the numeric accessor is just ignored. If your implementation doesn't expose the provider-specific type, make the extension method on IDbCommand or DbCommand and then safe-cast the DbDataReader to OracleDataReader.
EDIT: Added the hack for the GetFieldType method. But it might happen that the static mapping hashtable gets updated, so this could have unwanted effects; you need to test it properly. The fieldTypes array holds the types returned for all columns of the data reader.

JDBC, Oracle SQL, adding and displaying picture on label

I have a picture 'osta.jpg' on my desktop and I want to add it to my Oracle database, then load it from the database in a Java application (JDBC) and show it on a label with JLabel lblFilm = new JLabel(new ImageIcon("picture_from_SQL")) (Swing library).
I tried adding the image to the database:
ALTER TABLE film ADD bfile_loc bfile, bfile_type varchar2(4);
UPDATE film SET bfile_type = 'JPEG';
UPDATE film SET bfile_loc = bfilename('GIF_FILES','C:\Users\Maciej\Desktop\osta.jpg') WHERE kategoria IN 'thriller';
However, I don't know how to load it in the Java application.
First things first, in order to create a valid BFILE you need to specify an Oracle directory and then the location of the file within that directory. I don't know what your GIF_FILES directory is, but if it were created using
CREATE DIRECTORY GIF_FILES AS 'C:\Users\Maciej\Desktop';
then you would use the following to set the BFILE:
UPDATE film SET bfile_loc = bfilename('GIF_FILES', 'osta.jpg') WHERE kategoria IN 'thriller';
Secondly, in order to read the BFILE out of Oracle using JDBC, you need to use some Oracle-specific classes. Firstly, you need to cast the ResultSet to oracle.jdbc.OracleResultSet, so you can use the getBFILE() method to get the BFILE locator from the ResultSet. Then open the BFILE, get an InputStream from the BFILE's binaryStreamValue() method and read the data out of that.
Here's some example code (error handling could be improved):
import java.io.*;
import java.sql.*;
import oracle.sql.BFILE;
import oracle.jdbc.OracleResultSet;

// ...
// 'connection' here is your Oracle database connection.
Statement stmt = connection.createStatement();
OracleResultSet rSet = (OracleResultSet) stmt.executeQuery(
        "SELECT bfile_loc FROM film");
if (rSet.next()) {
    BFILE bfile = rSet.getBFILE(1);
    System.out.println("Length: " + bfile.length());
    bfile.open();
    InputStream is = bfile.binaryStreamValue();
    // Read data from input stream...
    is.close();
    bfile.close();
}
In the code above, I also fetch the length of the file, which isn't essential but it will fail if the file cannot be found.
Thirdly, you need to convert the InputStream to an ImageIcon. The ImageIcon class has a constructor that takes a byte array, and you can use answers to this question to convert the InputStream into a byte array.
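A minimal sketch of that last step (it reads from the is stream obtained above; the buffer size and label name are arbitrary):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import javax.swing.ImageIcon;
import javax.swing.JLabel;

// Copy the BFILE's InputStream into a byte array
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int bytesRead;
while ((bytesRead = is.read(chunk)) != -1) {
    buffer.write(chunk, 0, bytesRead);
}
byte[] imageBytes = buffer.toByteArray();

// Build the icon from the bytes and show it on the label
JLabel lblFilm = new JLabel(new ImageIcon(imageBytes));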

Extracting timestamp field in Oracle vs MySQL from Grails-Groovy

I am using Grails/Groovy, and from my controller I am currently doing this to retrieve a field from a MySQL table containing a datetime field:
SimpleDateFormat Sformat = new SimpleDateFormat("yyyy-MM-dd");
String format_datenow = Sformat.format(new Date());
String format_dateprevious = Sformat.format(new Date() -31);
String markerCalcQuery =
"select sum(trans_cnt) as t_cnt, location from map2_data where fdate between '"+format_dateprevious+"' and '"+format_dateprevious+"' and res_id = "+res_id+" group by map2_data.location";
res_row=gurculsql.rows(markerCalcQuery);
The above query fails on Oracle11g with error
ORA-01843: not a valid month.
I feel the error is because MySQL stores the date in this format: 2011-12-28 02:58:26 and Oracle stores the date like this: 28-DEC-11 02.58.26.455000000 PM
How do I make the code generalised? One way would be to make the Oracle database store the date in the same format, which I am thinking is the way to handle this rather than doing it in the code. If so, how do I change the date format in the Oracle db?
Can I specify the format in the Grails domain class for map2_data so that no matter what database it is we will have the datetime in the same format?
For several reasons (one being to code database-independently - which is basically what you'd need ;-)), it is better to avoid creating SQL statements in your code. Try to use the Grails criteria DSL, e.g. something like
def criteria = YourDomainObject.createCriteria()
criteria.get {
    between('fdate', new Date() - 31, new Date())
    projections {
        sum('trans_cnt')
        groupProperty('location')
    }
}
(untested, but should help you get started).
If for some reason you can't use the criteria API, try the fallback to HQL (Hibernate Query Language). I'd always try to avoid writing plain SQL.
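A rough HQL equivalent (untested; it assumes a domain class Map2Data with properties transCnt, location, fdate and resId mapped to the columns above):

def rows = Map2Data.executeQuery(
    'select sum(m.transCnt), m.location from Map2Data m ' +
    'where m.fdate between :fromDate and :toDate and m.resId = :resId ' +
    'group by m.location',
    [fromDate: new Date() - 31, toDate: new Date(), resId: res_id])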
In Oracle, dates have their own type, they aren't strings. If you have a string, you should convert it to a date using the TO_DATE function.
String format_datenow = "TO_DATE('" + Sformat.format(new Date()) + "', 'YYYY-MM-DD')";
To make it work also in MySQL, you can create a stored function named TO_DATE that just returns its first argument.

Oracle db gives ORA-01722 for seemingly NO REASON AT ALL

I'm trying to use an Oracle database with ADO.NET, and it is proving a painful experience. I use the Oracle client (Oracle.Data namespaces).
The following query runs fine from a query window:
UPDATE PRINT_ARGUMENT
SET VALUE = 'Started'
WHERE REQUEST_ID = 1 AND KEYWORD = '{7D066C95-D4D8-441b-AC26-0F4C292A2BE3}'
When I create an OracleCommand however the same thing blows up with ORA-01722. I can't figure out why.
var cmd = cnx.CreateCommand();
cmd.CommandText = @"
    UPDATE PRINT_ARGUMENT
    SET VALUE = :value
    WHERE REQUEST_ID = :requestID AND KEYWORD = :key";
cmd.Parameters.Add(new OracleParameter("requestID", (long)1));
cmd.Parameters.Add(new OracleParameter("key", "{7D066C95-D4D8-441b-AC26-0F4C292A2BE3}"));
cmd.Parameters.Add(new OracleParameter("value", "Started"));
cnx.Open();
try { int affected = cmd.ExecuteNonQuery(); }
finally { cnx.Close(); }
When I inspect the command in the debugger, the parameters appear to have mapped to the correct types: requestID has OracleDbType.Int64, key and value are both OracleDbType.Varchar2. The values of the parameters are also correct.
This gets even stranger when you consider that I have other queries that operate on the exact same columns (requestID, keyword, value) using the same approach - and they work without a hiccup.
For the record, the column types are requestID NUMBER(10,0); key VARCHAR2(30); value VARCHAR2(2000).
According to Oracle, ORA-01722 'invalid number' means a string failed to convert to a number. Neither of my string values is a number, neither of the OracleParameters created for them is numeric, and neither of the columns they are bound to is numeric.
By default, ODP.NET binds parameters by position, not by name, even if they have actual names in the SQL (instead of just ?). So, you are actually binding requestID to :value, key to :requestID and value to :key.
Correct the order of cmd.Parameters.Add in your code, or use BindByName to tell ODP.NET to use the parameter names.
Since you are using named parameters, you have to tell the Oracle client about it by setting BindByName on the command. Otherwise your parameters are matched by position and get mixed up (requestID is assigned to :value):
cmd.BindByName = true;
cmd.Parameters.Add(new OracleParameter("requestID", (long)1));
It's a strange and unexpected behavior, but that's how it is.

How to use variable mapping while using Oracle OLE DB provider in SSIS?

How to use variable mapping while using Oracle OLE DB provider? I have done the following:
Execute SQL Task: Full result set to hold results of the query.
Foreach ADO Enumerator: ADO object source above variable (Object data type).
Variable Mapping: 1 field.
The variable is set up with Evaluate as Expression set to True.
Data Flow: SQL Command from variable, as SELECT columnName FROM table where columnName = ?
Basically what I am trying to do is use the results of a query from a SQL Server table (i.e. account numbers) and pull records from Oracle that reference the results of the SQL query.
It feels like you're mixing approaches. The Parameterization ? is a placeholder for a variable which, in an OLE DB Source component, you'd map by clicking the Parameters button.
However, since you're using SQL Command from Variable, you can't use the Parameterization option, probably because the risk of a user changing the shape of the result set via Expressions is too high.
So, pick one - either "SQL Command" with proper parameterization, or "SQL Command from Variable" where you add in your parameters in terrible string-building fashion (as in "Dynamically assign value to variable in SSIS"). SQL Server 2005/2008/2008R2 people, be aware that you are limited to 4k characters in a string variable that uses Expressions.
Based on the comment "Basically what I am trying to do is use the results of a query from a SQL Server table (i.e. account numbers) and pull records from Oracle that reference the results of the SQL query":
There's two ways of going about this. With what you've currently developed, my above answer still stands. You are shredding the account numbers and using those as the filter in your query to Oracle. This will issue a query to Oracle for each account number you have. That may or may not be desirable.
The upside to this approach is that it will allow you to retrieve multiple rows. Assuming you are pulling Sales Order type of information, one account number likely has many sales order rows.
However, if you are working with something that has a zero to one mapping with the account numbers, like account level data, then you can simplify the approach you are taking. Move your SQL Server query to an OLE DB Source component within your data flow.
Then, what you are looking for is the Lookup Component. That allows you to enrich an existing row of data with additional data. Here you will specify a query like "SELECT AllTheColumnsICareAbout, AccountNumber FROM schema.Table". Then you will map the AccountNumber from the OLE DB Source to the one in the Lookup Component and click the checkmark next to all the columns you want to augment the existing row with.
I believe what you are asking is how to use SSIS to push data to Oracle via the OLE DB provider.
I will assume that Oracle is the destination. Data destinations with variable columns are not supported out of the box. You should be able to use the SSIS API or other means; I take a simpler approach.
I recently set up a package to get all tables from a database and create dynamic CSV output, one file for each table. You could do something similar.
Swap out the StreamWriter part for a section that 1) creates the table in the destination and 2) inserts the records into Oracle. I am not sure whether you will need to do single-row inserts to Oracle. I have another project that works in reverse (dynamic CSV into SQL Server); since I work with SQL Server, I load a DataTable and use the SqlBulkCopy class for bulk loading, which provides excellent performance (see the sketch after the script below).
public void Main()
{
    string datetime = DateTime.Now.ToString("yyyyMMddHHmmss");
    try
    {
        string TableName = Dts.Variables["User::CurrentTable"].Value.ToString();
        string FileDelimiter = ",";
        string TextQualifier = "\"";
        string FileExtension = ".csv";

        // Use the ADO.NET connection from the SSIS package to get data from the table
        SqlConnection myADONETConnection = new SqlConnection();
        myADONETConnection = (SqlConnection)(Dts.Connections["connection manager name"].AcquireConnection(Dts.Transaction) as SqlConnection);

        // Read data from the table or view into a data table
        string query = "Select * From [" + TableName + "]";
        SqlCommand cmd = new SqlCommand(query, myADONETConnection);
        //myADONETConnection.Open();
        DataTable d_table = new DataTable();
        d_table.Load(cmd.ExecuteReader());
        //myADONETConnection.Close();

        string FileFullPath = Dts.Variables["$Project::ExcelToCsvFolder"].Value.ToString() + "\\Output\\" + TableName + FileExtension;

        StreamWriter sw = null;
        sw = new StreamWriter(FileFullPath, false);

        // Write the header row to the file
        int ColumnCount = d_table.Columns.Count;
        for (int ic = 0; ic < ColumnCount; ic++)
        {
            sw.Write(TextQualifier + d_table.Columns[ic] + TextQualifier);
            if (ic < ColumnCount - 1)
            {
                sw.Write(FileDelimiter);
            }
        }
        sw.Write(sw.NewLine);

        // Write all rows to the file
        foreach (DataRow dr in d_table.Rows)
        {
            for (int ir = 0; ir < ColumnCount; ir++)
            {
                if (!Convert.IsDBNull(dr[ir]))
                {
                    sw.Write(TextQualifier + dr[ir].ToString() + TextQualifier);
                }
                if (ir < ColumnCount - 1)
                {
                    sw.Write(FileDelimiter);
                }
            }
            sw.Write(sw.NewLine);
        }
        sw.Close();

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception exception)
    {
        // Create a log file for errors
        //using (StreamWriter sw = File.CreateText(Dts.Variables["User::LogFolder"].Value.ToString() + "\\" +
        //    "ErrorLog_" + datetime + ".log"))
        //{
        //    sw.WriteLine(exception.ToString());
        //}
        Dts.TaskResult = (int)ScriptResults.Failure;
        throw;
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
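For reference, a minimal sketch of the SqlBulkCopy approach mentioned above (the connection string and destination table name are placeholders; this targets SQL Server, so it only illustrates the bulk-loading idea, not the Oracle insert itself):

using System.Data;
using System.Data.SqlClient;

// Bulk-load an already populated DataTable (such as d_table above) into a destination table
using (var bulkConnection = new SqlConnection("your connection string"))
{
    bulkConnection.Open();
    using (var bulkCopy = new SqlBulkCopy(bulkConnection))
    {
        bulkCopy.DestinationTableName = "dbo.YourDestinationTable";
        bulkCopy.WriteToServer(d_table);
    }
}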
