Mirth Connect use of executeUpdateAndGetGeneratedKeys with Oracle

I am using Mirth Connect 3.5.0.8232. I have created a persisted connection to an Oracle database and am using it throughout my source and destination connectors. One of the methods Mirth provides for talking to the database is executeUpdateAndGetGeneratedKeys. It would be quite useful for insert statements, since it can return the primary keys of the inserted rows.
My question is - how do you specify WHICH columns to return? Running the provided function works, but returns ROWID in the CachedRowSet, which is not what I want.
As far as I understand, which columns are returned depends on the type of database, and every database behaves differently. I am interested in Oracle specifically.
Thank you.

The executeUpdateAndGetGeneratedKeys method uses the Statement.RETURN_GENERATED_KEYS flag to signal to the driver that auto-generated keys should be returned. However, from the Oracle docs:
If key columns are not explicitly indicated, then Oracle JDBC drivers cannot identify which columns need to be retrieved. When a column name or column index array is used, Oracle JDBC drivers can identify which columns contain auto-generated keys that you want to retrieve. However, when the Statement.RETURN_GENERATED_KEYS integer flag is used, Oracle JDBC drivers cannot identify these columns. When the integer flag is used to indicate that auto-generated keys are to be returned, the ROWID pseudo column is returned as key. The ROWID can be then fetched from the ResultSet object and can be used to retrieve other columns.
So instead, try using their suggestion of passing in a column name array to prepareStatement:
var dbConn;
try {
    // Note: the thin URL uses '@' before the host (the '#' in the original post was a typo)
    dbConn = DatabaseConnectionFactory.createDatabaseConnection('oracle.jdbc.driver.OracleDriver', 'jdbc:oracle:thin:@localhost:1521:DBNAME', 'user', 'pass');

    // Create a Java String array directly
    var keyColumns = java.lang.reflect.Array.newInstance(java.lang.String, 1);
    keyColumns[0] = 'id';

    var ps = dbConn.getConnection().prepareStatement('INSERT INTO tablename (columnname) VALUES (?)', keyColumns);
    try {
        // Set variables here
        ps.setObject(1, 'test');
        ps.executeUpdate();

        var result = ps.getGeneratedKeys();
        result.next();
        var generatedKey = result.getObject(1);
        logger.info(generatedKey);
    } finally {
        ps.close();
    }
} finally {
    if (dbConn) {
        dbConn.close();
    }
}

Related

Getting max value on server (Entity Framework)

I'm using EF Core but I'm not really an expert with it, especially when it comes to details like querying tables in a performant manner...
So what I try to do is simply get the max-value of one column from a table with filtered data.
What I have so far is this:
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // Filter the table data to the rows relevant to us.
    // The whole table may contain 0 rows or millions of them.
    IQueryable<Measurement> dbMeasuringsExisting = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
                    && meas.MachineId == DBMatchingItem.Id);

    if (dbMeasuringsExisting.Any())
    {
        // The max value we're interested in. dbMeasuringsExisting could
        // still contain millions of rows.
        iMaxMessID = dbMeasuringsExisting.Max(meas => meas.MessID);
    }
}
The equivalent SQL to what I want would be something like this:
select max(MessID)
from Measurement
where MeasuringInstanceGuid = Globals.MeasProgInstance.Guid
and MachineId = DBMatchingItem.Id;
While the above code works (it returns the correct value), I think it has a performance issue as the table grows larger, because the Max filtering is done client-side after all rows have been transferred. Or am I wrong here?
How to do it better? I want the database server to filter my data. Of course I don't want any SQL script ;-)
You can address this by typing the projection as nullable, so that an empty set returns null instead of throwing, and then applying a default value for the int. Alternatively, you can just assign the result to a nullable int. Note the assumption here that the ID has an integer return type; the same principle would apply to a Guid as well.
int MaxMessID = dbMeasuringsExisting.Max(p => (int?)p.MessID) ?? 0;
There is no need for the Any() call, as it causes an additional round trip to the database, which is not desirable in this case.
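A minimal sketch of the whole method rewritten this way, reusing the names from the question; both the filter and the aggregate translate into a single SELECT MAX(...) WHERE ... statement that runs on the server:
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // One round trip: the WHERE and MAX are both evaluated server-side, and
    // the nullable cast makes an empty match set yield 0 instead of throwing.
    iMaxMessID = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
                    && meas.MachineId == DBMatchingItem.Id)
        .Max(meas => (int?)meas.MessID) ?? 0;
}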

ADO.NET - Data Adapter Fill Method - Fill Dataset with rows modified in SQL

I am using ADO.NET with Data Adaptor to Fill a Dataset in my .NET Core 3.1 Project.
The first run of the Fill method occurs when my program initially starts, so I have an in-memory cache to use with my business/program logic. When I then make any changes to the tables using EF Core, once the changes have been saved I run the Data Adapter Fill method again to re-populate the Dataset with the updates from the tables that were modified in SQL through EF Core.
Reading various docs for a number of days now, what I'm unclear about is whether the Data Adapter Fill method overwrites all of the existing table rows in the Dataset each time the fill method is called? i.e if I'm loading a dataset with a table from SQL that has 10k rows, is it going to overwrite all 10k rows that exist in the dataset, even if 99% of the rows have not changed?
The reason I am going down the Dataset route is that I want to keep an in-memory cache of the various tables from SQL so I can query the data as fast as possible without issuing queries to SQL all the time.
The solution I want is something along the lines of the Data Adapter Fill method, but I don't want rows in the Dataset that have not been modified in SQL since the last run to be overwritten.
Is this how things are working already? or do I have to look for another solution?
Below is just an example of the Adapter Fill method.
public async Task<AdoNetResult> FillAlarmsDataSet()
{
    string connectionString = _config.GetConnectionString("DefaultConnection");
    try
    {
        string cmdText1 = "SELECT * FROM [dbo].[Alarm] ORDER BY Id;" +
                          "SELECT * FROM [dbo].[AlarmApplicationRole] ORDER BY Id;";
        dataAdapter = new SqlDataAdapter(cmdText1, connectionString);

        // Create table mappings. With a batch query the incoming result sets
        // are named "Table" and "Table1", so those are the source names to map.
        dataAdapter.TableMappings.Add("Table", "Alarm");
        dataAdapter.TableMappings.Add("Table1", "AlarmApplicationRole");

        alarmDataSet = new DataSet
        {
            Locale = CultureInfo.InvariantCulture
        };

        // Create and fill the DataSet
        await Task.Run(() => dataAdapter.Fill(alarmDataSet));

        return AdoNetResult.Success;
    }
    catch (Exception ex)
    {
        // Return the task with details of the exception
        return AdoNetResult.Failed(ex);
    }
}
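For what it's worth, whether Fill refreshes existing rows in place or appends duplicates depends on whether the DataSet has primary-key information. A minimal sketch (not from the original post) of forcing key inference so that re-filling merges rows by primary key:
// With AddWithKey, Fill also retrieves primary-key schema, so a second Fill
// updates matching rows in place and appends only genuinely new rows instead
// of duplicating all existing ones.
dataAdapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
dataAdapter.Fill(alarmDataSet);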

Can I control how Oracle maps the integer types in ADO.NET?

I've got a legacy database that was created with the database type INTEGER for many (1,000+) Oracle columns. A database with the same structure exists for MS SQL. As I was told, the original definition was created using a tool that generated the DBMS-specific scripts for MS SQL and Oracle from a logical model.
Using C++ and MFC, the columns were mapped nicely to the integer type for both DBMSs.
I am porting this application to .NET and C#. The same C# codebase is used to access both MS SQL and Oracle. We use the same DataSets and logic and we need the same types (int32 for both).
The ODP.NET driver from Oracle maps them to Decimal. This is logical as Oracle created the integer columns as NUMBER(37) automatically. The columns in MS SQL map to int32.
Can I somehow control how to map the types in the ODP.NET driver? I would like to say something like "map NUMBER(37) to int32". The columns will never hold values bigger than the limits of an int32. We know this because it is being used in the MS SQL version.
Alternatively, can I modify all columns from NUMBER(37) to NUMBER(8) or SIMPLE_INTEGER so that they map to the right type for us? Many of these columns are used as primary keys (think autoincrement).
Regarding type mapping, I hope this is what you need:
http://docs.oracle.com/cd/E51173_01/win.122/e17732/entityDataTypeMapping.htm#ODPNT8300
Regarding the type change: if the table is empty, you may use the following script (just replace [YOUR_TABLE_NAME] with the table name in upper case):
DECLARE
    v_table_name CONSTANT VARCHAR2(30) := '[YOUR_TABLE_NAME]';
BEGIN
    -- For NUMBER columns the precision is stored in data_precision
    -- (data_length is the internal byte length, not the precision)
    FOR col IN (SELECT * FROM user_tab_columns
                WHERE table_name = v_table_name
                  AND data_type = 'NUMBER'
                  AND data_precision = 37)
    LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE '||v_table_name||' MODIFY '||col.column_name||' NUMBER(8)';
    END LOOP;
END;
/
If some of these columns are not empty, then you can't decrease their precision directly.
If you don't have too much data, you may move it to a temp table:
create table temp_table as select * from [YOUR_TABLE_NAME];
then truncate the original table:
truncate table [YOUR_TABLE_NAME];
then run the script above, and then move the data back:
insert /*+ append */ into [YOUR_TABLE_NAME] select * from temp_table;
commit;
If the data volume is substantial, it is better to move it only once. In that case it is faster to create a new table with the correct datatypes and all indexes, constraints, and so on, then move the data, then rename both tables so that the new table ends up with the proper name.
Unfortunately, the mapping of numeric types between .NET and Oracle is hardcoded in the OracleDataReader class.
In general I prefer to set up appropriate data types in the database, so if possible I would change the column datatypes, because they better represent the actual values and their constraints.
Another option is to wrap the tables in views that cast to NUMBER(8), but this will negatively impact execution plans because it prohibits index lookups.
Then you have also some application implementation options:
Implement your own data reader or a subset of the ADO.NET classes (inheriting from DbProviderFactory, DbConnection, DbCommand, DbDataReader, etc. and wrapping the Oracle classes); how much work this is depends on how complex your implementation needs to be. Oracle.DataAccess, Devart and all other providers do exactly the same, because it gives total control over everything, including any magic with the data types. If the datatype conversion is the only thing you want to achieve, most of the implementation would be just calls to the wrapped class's methods and properties.
If you have access to the OracleDataReader after the command is executed, and before you start to read it, you can do a simple hack and set the resulting numeric type using reflection (the following implementation is just a simplified demonstration).
However, this will not work with ExecuteScalar, as that method never exposes the underlying data reader.
var connection = new OracleConnection("DATA SOURCE=HQ_PDB_TCP;PASSWORD=oracle;USER ID=HUSQVIK");
connection.Open();
var command = connection.CreateCommand();
command.CommandText = "SELECT 1 FROM DUAL";
var reader = command.ExecuteDatabaseReader();
reader.Read();
Console.WriteLine(reader[0].GetType().FullName);
Console.WriteLine(reader.GetFieldType(0).FullName);
// Requires: using System; using System.Collections; using System.Reflection;
// and the Oracle.DataAccess.Client namespace.
public static class DataReaderExtensions
{
    private static readonly FieldInfo NumericAccessorField = typeof(OracleDataReader).GetField("m_dotNetNumericAccessor", BindingFlags.NonPublic | BindingFlags.Instance);
    private static readonly object Int32DotNetNumericAccessor = Enum.Parse(typeof(OracleDataReader).Assembly.GetType("Oracle.DataAccess.Client.DotNetNumericAccessor"), "GetInt32");
    private static readonly FieldInfo MetadataField = typeof(OracleDataReader).GetField("m_metaData", BindingFlags.NonPublic | BindingFlags.Instance);
    private static readonly FieldInfo FieldTypesField = typeof(OracleDataReader).Assembly.GetType("Oracle.DataAccess.Client.MetaData").GetField("m_fieldTypes", BindingFlags.NonPublic | BindingFlags.Instance);

    public static OracleDataReader ExecuteDatabaseReader(this OracleCommand command)
    {
        var reader = command.ExecuteReader();

        // Force the numeric accessor of the first column to Int32
        var columnNumericAccessors = (IList)NumericAccessorField.GetValue(reader);
        columnNumericAccessors[0] = Int32DotNetNumericAccessor;

        // Also patch the metadata so GetFieldType reports Int32
        var metadata = MetadataField.GetValue(reader);
        var fieldTypes = (Type[])FieldTypesField.GetValue(metadata);
        fieldTypes[0] = typeof(Int32);

        return reader;
    }
}
I implemented an extension method for command execution that returns the reader with the desired column numeric types set up. Without setting the numeric accessor (it's just the internal enum Oracle.DataAccess.Client.DotNetNumericAccessor) you will get System.Decimal; with the accessor set you get Int32. Using this technique you can get Int16, Int32, Int64, Float or Double.
The columnNumericAccessors index is a column index, and the accessor is applied only to numeric types; if a column is DATE or VARCHAR the numeric accessor is simply ignored. If your implementation doesn't expose the provider-specific type, make the extension method target IDbCommand or DbCommand and then safe-cast the DbDataReader to OracleDataReader.
EDIT: Added the hack for the GetFieldType method. Be aware that the static mapping hashtable might get updated, so this could have unwanted effects; you need to test it properly. The fieldTypes array holds the types returned for all columns of the data reader.

how to delete multiple rows of data with linq to EF using DbContext

Within a project I have a database table whose columns include SharingAgencyId and ReceivingAgencyId.
I would like to be able to delete from this table all rows whose SharingAgencyId and ReceivingAgencyId match values that I can pass in.
What I have tried so far:
public static ICollection<SecurityDataShare> UpdateSharedEntites(long conAgency, long recAgency)
{
    ICollection<SecurityDataShare> agShares = null;
    try
    {
        using (var context = new ProjSecurityEntities(string.Empty))
        {
            // Materialize the matching rows before the context is disposed
            agShares = (from a in context.SecurityDataShares
                            .Where(c => c.ReceivingAgencyId == recAgency && c.SharingAgencyId == conAgency)
                        select a).ToList();
        }
        return agShares;
    }
    catch (Exception ex)
    {
        //ToDo
        throw;
    }
}
My thought process was to retrieve the records where the IDs matched the parameters passed in, then iterate through agShares with a foreach loop, removing each row, and finally save my changes. With the current implementation I don't seem to have access to any of the Delete methods.
Looking at the example above, I'd appreciate any suggestions on how to remove the rows within the table that contain the values 43 and 39 using the DbContext.
Cheers
If I understand right, your DbContext's properties, like SecurityDataShares, should be typed as IDbSet<SecurityDataShare>. If that's correct, you should be able to use its Remove method:
foreach (var agShare in agShares) {
    context.SecurityDataShares.Remove(agShare);
}
context.SaveChanges();
Be aware that this creates a separate SQL statement for each deleted object. If you expect the number of objects to be rather large, you may want to use a stored procedure instead.
Entity Framework doesn't make it easy to delete multiple rows with a single command (that I know of). My preference is to run a SQL statement directly for multi-entity updates/deletes, using native SQL through the DbContext.
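A minimal sketch of that approach; the table name in the SQL is illustrative, so adjust it to whatever your model actually maps to:
using (var context = new ProjSecurityEntities(string.Empty))
{
    // Single DELETE on the server; no entities are materialized or tracked.
    // EF wraps the positional arguments as parameters @p0 and @p1.
    context.Database.ExecuteSqlCommand(
        "DELETE FROM SecurityDataShare WHERE SharingAgencyId = @p0 AND ReceivingAgencyId = @p1",
        conAgency, recAgency);
}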
You can also pass a DataTable to a stored procedure, with the database containing a user-defined table type matching your data, and use that table inside the stored procedure to delete the matching rows from the database table.
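A minimal sketch of that table-valued-parameter approach with plain ADO.NET; the type dbo.AgencyPairType and procedure dbo.DeleteSharedEntities are hypothetical names used only for illustration:
// dbo.AgencyPairType (user-defined table type) and dbo.DeleteSharedEntities
// (stored procedure) are hypothetical; substitute your own database objects.
var pairs = new DataTable();
pairs.Columns.Add("SharingAgencyId", typeof(long));
pairs.Columns.Add("ReceivingAgencyId", typeof(long));
pairs.Rows.Add(43L, 39L);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.DeleteSharedEntities", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@Pairs", pairs);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.AgencyPairType";
    conn.Open();
    cmd.ExecuteNonQuery();
}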

How to use variable mapping while using Oracle OLE DB provider in SSIS?

How to use variable mapping while using Oracle OLE DB provider? I have done the following:
Execute SQL Task: Full result set to hold results of the query.
Foreach ADO Enumerator: the ADO object source is the variable above (Object data type).
Variable Mapping: 1 field.
The variable is set up to Evaluate as an Expression (True).
Data Flow: SQL Command from variable, as SELECT columnName FROM table where columnName = ?
Basically what I am trying to do is use the results of a query from a SQL Server table (i.e., account numbers) and pull the records from Oracle that reference the results of the SQL query.
It feels like you're mixing approaches. The ? parameterization is a placeholder for a variable which, in an OLE DB Source component, you'd map by clicking the Parameters button.
However, since you're using SQL Command from Variable, the Parameterization option isn't available, probably because the risk of a user changing the shape of the result set via Expressions is too high.
So, pick one: either "SQL Command" with proper parameterization, or "SQL Command from Variable" where you add in your parameters in terrible string-building fashion, as in Dynamically assign value to variable in SSIS. SQL Server 2005/2008/2008R2 people, be aware that you are limited to 4k characters in a string variable that uses Expressions.
Based on the comment of "Basically what I am trying to do is use the results of a query from a SQL Server table, (ie ..account numbers) and pull records from Oracle reference the results from the SQL query"
There's two ways of going about this. With what you've currently developed, my above answer still stands. You are shredding the account numbers and using those as the filter in your query to Oracle. This will issue a query to Oracle for each account number you have. That may or may not be desirable.
The upside to this approach is that it will allow you to retrieve multiple rows. Assuming you are pulling Sales Order type of information, one account number likely has many sales order rows.
However, if you are working with something that has a zero to one mapping with the account numbers, like account level data, then you can simplify the approach you are taking. Move your SQL Server query to an OLE DB Source component within your data flow.
Then, what you are looking for is the Lookup Component. That allows you to enrich an existing row of data with additional data. Here you will specify a query like "SELECT AllTheColumnsICareAbout, AccountNumber FROM schema.Table ". Then you will map the AccountNumber from the OLE DB Source to the one in the Lookup Component and the click the checkmark next to all the columns you want to augment the existing row with.
I believe what you are asking is how to use SSIS to push data to Oracle via the OLE DB provider.
I will assume that Oracle is the destination. Using data destinations with variable columns is not supported out of the box; you should be able to use the SSIS API or other means, but I take a simpler approach.
I recently set up a package to get all tables from a database and create dynamic CSV output. One file for each table. You could do something similar.
Switch out the StreamWriter part with a section that 1. creates the table in the destination and 2. inserts the records into Oracle. I am not sure whether you will need to do single inserts to Oracle. Another project of mine works in reverse, loading dynamic CSV into SQL. Since I work with SQL Server there, I load a DataTable and use the SqlBulkCopy class for bulk loading, which provides excellent performance; a minimal sketch of that bulk-copy step follows the script below.
public void Main()
{
    string datetime = DateTime.Now.ToString("yyyyMMddHHmmss");
    try
    {
        string TableName = Dts.Variables["User::CurrentTable"].Value.ToString();
        string FileDelimiter = ",";
        string TextQualifier = "\"";
        string FileExtension = ".csv";

        // Use the ADO.NET connection from the SSIS package to get data from the table
        SqlConnection myADONETConnection = new SqlConnection();
        myADONETConnection = (SqlConnection)(Dts.Connections["connection manager name"].AcquireConnection(Dts.Transaction) as SqlConnection);

        // Read data from the table or view into a DataTable
        string query = "Select * From [" + TableName + "]";
        SqlCommand cmd = new SqlCommand(query, myADONETConnection);
        //myADONETConnection.Open();
        DataTable d_table = new DataTable();
        d_table.Load(cmd.ExecuteReader());
        //myADONETConnection.Close();

        string FileFullPath = Dts.Variables["$Project::ExcelToCsvFolder"].Value.ToString() + "\\Output\\" + TableName + FileExtension;

        StreamWriter sw = new StreamWriter(FileFullPath, false);

        // Write the header row to the file
        int ColumnCount = d_table.Columns.Count;
        for (int ic = 0; ic < ColumnCount; ic++)
        {
            sw.Write(TextQualifier + d_table.Columns[ic] + TextQualifier);
            if (ic < ColumnCount - 1)
            {
                sw.Write(FileDelimiter);
            }
        }
        sw.Write(sw.NewLine);

        // Write all rows to the file
        foreach (DataRow dr in d_table.Rows)
        {
            for (int ir = 0; ir < ColumnCount; ir++)
            {
                if (!Convert.IsDBNull(dr[ir]))
                {
                    sw.Write(TextQualifier + dr[ir].ToString() + TextQualifier);
                }
                if (ir < ColumnCount - 1)
                {
                    sw.Write(FileDelimiter);
                }
            }
            sw.Write(sw.NewLine);
        }
        sw.Close();

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception exception)
    {
        // Create a log file for errors
        //using (StreamWriter sw = File.CreateText(Dts.Variables["User::LogFolder"].Value.ToString() + "\\" +
        //    "ErrorLog_" + datetime + ".log"))
        //{
        //    sw.WriteLine(exception.ToString());
        //}
        Dts.TaskResult = (int)ScriptResults.Failure;
        throw;
    }
}
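For reference, the SqlBulkCopy step mentioned above is short once the DataTable is loaded. A minimal sketch reusing the names from the script; the destination table name is a placeholder:
// Bulk-load the DataTable into SQL Server in one batch operation.
// Assumes the connection is already open (AcquireConnection typically
// returns an open connection); "dbo.TargetTable" is illustrative only.
using (var bulkCopy = new SqlBulkCopy(myADONETConnection))
{
    bulkCopy.DestinationTableName = "dbo.TargetTable";
    bulkCopy.WriteToServer(d_table);
}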
