JDBC batch insert into Oracle not working

I'm using JDBC batching to insert about a million rows. I found that the Oracle driver doesn't work as expected - the batch insert takes a very long time.
I decided to sniff the application's traffic with Wireshark. And what did I see?
The Oracle JDBC driver sent the first request (1),
then it sent the data (2), about 2,500 rows,
the Oracle server responded with some packet (3),
and from then on all the remaining data was sent as one-by-one inserts, not batched:
insert into my_table...
insert into my_table...
Why does this happen? How can I fix this?
Table
create table my_table (val number);
Code
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.junit.Test;

public class scratch_1 {

    @Test
    public void foo() throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        try (Connection con = getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (long i = 0; i < 100_000; i++) {
                    ps.setBigDecimal(1, BigDecimal.valueOf(i));
                    ps.addBatch();
                }
                ps.executeBatch();
                ps.clearBatch();
            }
            con.commit();
        }
    }

    private Connection getConnection() throws SQLException {
        String url = "jdbc:oracle:thin:@localhost:1521:orcl";
        String user = "my_user";
        String password = "my_password";
        return DriverManager.getConnection(url, user, password);
    }
}
Wireshark capture illustrating what happens (screenshot omitted).
Environment
$ java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
Oracle Database 12.2.0.1 JDBC Driver
Server: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit
Running the query multiple times does not help - same result.
250k rows were "batch"-inserted in 465 s.
On the server side, v$sql shows:
SELECT *
FROM
  (SELECT REGEXP_SUBSTR(sql_text, 'insert into [^\(]*') sql_text,
          sql_id,
          TRUNC(
            CASE
              WHEN SUM(executions) > 0
              THEN SUM(rows_processed) / SUM(executions)
            END, 2) rows_per_execution
   FROM v$sql
   WHERE parsing_schema_name = 'MY_SCHEMA'
     AND sql_text LIKE 'insert into%'
   GROUP BY sql_text,
            sql_id
  )
ORDER BY rows_per_execution ASC;

Problem solved
Thank you all for your responses, I'm very grateful!
My previous example didn't describe the real problem. Sorry I didn't give the whole picture at once.
I had simplified the code to the point where the handling of null values was lost.
Please check the updated example below.
If I use java.sql.Types.NULL, the Oracle JDBC driver uses a VarcharNullBinder for the null values, which somehow leads to this strange behavior. It appears the driver batches until the first null with an unspecified type, then falls back to one-by-one inserts.
After changing it to java.sql.Types.NUMERIC for the NUMBER column, the driver uses a VarnumNullBinder and works correctly with it - fully batched.
Code
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

import org.junit.Test;

public class scratch_1 {

    @Test
    public void foo() throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        try (Connection con = getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (long i = 0; i < 100_000; i++) {
                    if (i % 2 == 0) {
                        // the real problem was here:
                        //ps.setNull(1, Types.NULL); // wrong way!
                        ps.setNull(1, Types.NUMERIC); // correct
                    } else {
                        ps.setBigDecimal(1, BigDecimal.valueOf(i));
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
                ps.clearBatch();
            }
            con.commit();
        }
    }

    private Connection getConnection() throws SQLException {
        String url = "jdbc:oracle:thin:@localhost:1521:orcl";
        String user = "my_user";
        String password = "my_password";
        return DriverManager.getConnection(url, user, password);
    }
}

I am not sure where this limit comes from. However, the Oracle JDBC Developer's Guide gives this recommendation:
Oracle recommends to keep the batch sizes in the range of 100 or less. Larger batches provide little or no performance improvement and may actually reduce performance due to the client resources required to handle the large batch.
Of course larger batch sizes may be used, but they do not necessarily increase performance, as you've witnessed. Use the batch size that is optimal for your use case and for the JDBC driver/DB involved. You could try batches of 2,500 in your case to see the best performance benefit.
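As a rough illustration, here is a minimal sketch of capping the batch size by flushing every N rows. The table and loop mirror the question; the chunk size of 2,500 is just the value observed in the capture above, an assumption to tune rather than a driver recommendation:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {

    // Illustrative chunk size; tune per driver/DB.
    private static final int BATCH_SIZE = 2_500;

    public static void insertChunked(Connection con) throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (long i = 0; i < 100_000; i++) {
                ps.setBigDecimal(1, BigDecimal.valueOf(i));
                ps.addBatch();
                if ((i + 1) % BATCH_SIZE == 0) {
                    ps.executeBatch(); // send this chunk to the server
                    ps.clearBatch();
                }
            }
            ps.executeBatch(); // flush any remaining rows
        }
        con.commit();
    }
}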

Related

Query Oracle using .NET Core throws exception for Connect Identifier

I'm quite new to Oracle and have never used it before. I'm trying to query an Oracle database from a .NET Core web application with the NuGet package Oracle.ManagedDataAccess.Core installed, using an alias as the Data Source, but I'm receiving the following error (if I use the full connection string instead, the query works correctly):
ORA-12154: TNS:could not resolve the connect identifier specified
at OracleInternal.Network.AddressResolution..ctor(String TNSAlias, SqlNetOraConfig SNOConfig, Hashtable ObTnsHT, String instanceName, ConnectionOption CO)
at OracleInternal.Network.OracleCommunication.Resolve(String tnsAlias, ConnectionOption& CO)
at OracleInternal.ConnectionPool.PoolManager`3.ResolveTnsAlias(ConnectionString cs, Object OC)
at OracleInternal.ServiceObjects.OracleConnectionImpl.Connect(ConnectionString cs, Boolean bOpenEndUserSession, OracleConnection connRefForCriteria, String instanceName)
From a few links I gathered that there is a tnsnames.ora file which must contain the mapping between connect identifiers and connect descriptors, and that this file can be found on the machine where Oracle is installed, under ORACLE_HOME\network\admin.
Question is:
Does the alias name that I'm using in my connection string, which reads Data Source=<alias_name>;User ID=<user>;Password=<password>, need to be specified in the tnsnames.ora file? I don't have access to the machine the Oracle database resides on, otherwise I would have checked earlier.
Here's the code snippet for more info (connection string and query values are mocked out):
public static string Read()
{
    const string connectionString = "Data Source=TestData;User ID=User1;Password=Pass1";
    const string query = "select xyz from myTable";
    string result;
    using (var connection = new OracleConnection(connectionString))
    {
        try
        {
            connection.Open();
            var command = new OracleCommand(query) { Connection = connection, CommandType = CommandType.Text };
            OracleDataReader reader = command.ExecuteReader();
            reader.Read();
            result = reader.GetString(0);
        }
        catch (Exception exception)
        {
            Console.WriteLine(exception);
            throw;
        }
    }
    return result;
}
Is this enough, or does something else need to be added/changed here? Or is the issue with the tnsnames.ora file, which might not contain the alias name?
Thanks in advance
For the data source
Data Source=TestData;User Id=myUsername;Password=myPassword;
your tnsnames.ora should contain an entry like the one below:
TestData=(DESCRIPTION=
  (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort)))
  (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)))
Since
Data Source=TestData;User Id=myUsername;Password=myPassword;
is the same as
Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost) (PORT=MyPort)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)));User Id=myUsername;Password=myPassword;

JDBC statement pooling with DB2 does not show a significant time difference

I'm using the JT400 JDBC driver for DB2 to connect to a DB2 server on an AS/400 (Application System/400), an IBM midrange system.
My goal is to insert into three tables from outside the IBM midrange system, from a cloud instance (e.g., Amazon AWS).
To improve performance:
1) I use already-established connections to connect to DB2
(using org.apache.commons.dbcp.BasicDataSource or com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource; both work fine).
public class AS400JDBCManagedConnectionPoolDataSource extends AS400JDBCManagedDataSource implements ConnectionPoolDataSource, Referenceable, Serializable {
}
public class AS400JDBCManagedDataSource extends ToolboxWrapper implements DataSource, Referenceable, Serializable, Cloneable {
}
2) I want to cache the INSERT INTO statements for all three tables, so that I don't have to send the query and compile it every time, which is expensive; instead I would pass only the parameters. (Obviously I am doing this with JDBC prepared statements.)
Based on an official IBM document, Optimize Access to DB2 for i5/OS from Java and WebSphere, pages 17-20 - Enabling Extended Dynamic Support - it should be possible to cache the statements with AS400JDBCManagedConnectionPoolDataSource.
BUT the problem is that the INSERT INTO queries are compiled each time, which takes 200 ms * 3 queries = 600 ms per event.
Here is the example I'm using:
import java.sql.Connection;
import java.sql.SQLException;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource;

public class CustomerOrderEventHandler extends MultiEventHandler {

    private static Logger logger = LogManager.getLogger(CustomerOrderEventHandler.class);

    //private BasicDataSource establishedConnections = new BasicDataSource();
    //private DB2SimpleDataSource nativeEstablishedConnections = new DB2SimpleDataSource();
    private AS400JDBCManagedConnectionPoolDataSource dynamicEstablishedConnections =
            new AS400JDBCManagedConnectionPoolDataSource();

    private State3 state3;
    private State2 state2;
    private State1 state1;

    public CustomerOrderEventHandler() throws SQLException {
        dynamicEstablishedConnections.setServerName(State.server);
        dynamicEstablishedConnections.setDatabaseName(State.DATABASE);
        dynamicEstablishedConnections.setUser(State.user);
        dynamicEstablishedConnections.setPassword(State.password);
        dynamicEstablishedConnections.setSavePasswordWhenSerialized(true);
        dynamicEstablishedConnections.setPrompt(false);
        dynamicEstablishedConnections.setMinPoolSize(3);
        dynamicEstablishedConnections.setInitialPoolSize(5);
        dynamicEstablishedConnections.setMaxPoolSize(50);
        dynamicEstablishedConnections.setExtendedDynamic(true);
        Connection connection = dynamicEstablishedConnections.getConnection();
        connection.close();
    }

    public void onEvent(CustomerOrder orderEvent) {
        long start = System.currentTimeMillis();
        Connection dbConnection = null;
        try {
            dbConnection = dynamicEstablishedConnections.getConnection();
            long connectionSetupTime = System.currentTimeMillis() - start;
            state3 = new State3(dbConnection);
            state2 = new State2(dbConnection);
            state1 = new State1(dbConnection);
            long initialisation = System.currentTimeMillis() - start - connectionSetupTime;
            int[] state3Result = state3.apply(orderEvent);
            int[] state2Result = state2.apply(orderEvent);
            long state1Result = state1.apply(orderEvent);
            dbConnection.commit();
            logger.info("eventId=" + getEventId(orderEvent) +
                    ",connectionSetupTime=" + connectionSetupTime +
                    ",queryPreCompilation=" + initialisation +
                    ",insertionOnlyTimeTaken=" +
                    (System.currentTimeMillis() - (start + connectionSetupTime + initialisation)) +
                    ",insertionTotalTimeTaken=" + (System.currentTimeMillis() - start));
        } catch (SQLException e) {
            logger.error("Error updating the order states.", e);
            if (dbConnection != null) {
                try {
                    dbConnection.rollback();
                } catch (SQLException e1) {
                    logger.error("Error rolling back the state.", e1);
                }
            }
            throw new CustomerOrderEventHandlerRuntimeException("Error updating the customer order states.", e);
        }
    }

    private Long getEventId(CustomerOrder order) {
        return Long.valueOf(order.getMessageHeader().getCorrelationId());
    }
}
And the states with the insert commands look like this:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class State2 extends State {

    private static Logger logger = LogManager.getLogger(DetailState.class);

    Connection connection;
    PreparedStatement preparedStatement;

    String detailsCompiledQuery = "INSERT INTO " + DATABASE + "." + getStateName() +
            "(" + DetailState.EVENT_ID + ", " +
            State2.ORDER_NUMBER + ", " +
            State2.SKU_ID + ", " +
            State2.SKU_ORDERED_QTY + ") VALUES(?, ?, ?, ?)";

    public State2(Connection connection) throws SQLException {
        this.connection = connection;
        this.preparedStatement = this.connection.prepareStatement(detailsCompiledQuery); // this is taking ~200ms each time
        this.preparedStatement.setPoolable(true); // might not be required, not sure
    }

    public int[] apply(CustomerOrder event) throws StateException {
        event.getMessageBody().getDetails().forEach(detail -> {
            try {
                preparedStatement.setLong(1, getEventId(event));
                preparedStatement.setString(2, getOrderNo(event));
                preparedStatement.setInt(3, detail.getSkuId());
                preparedStatement.setInt(4, detail.getQty());
                preparedStatement.addBatch();
            } catch (SQLException e) {
                logger.error(e);
                throw new StateException("Error setting up data", e);
            }
        });
        long startedTime = System.currentTimeMillis();
        int[] inserted = new int[0];
        try {
            inserted = preparedStatement.executeBatch();
        } catch (SQLException e) {
            throw new StateException("Error updating allocations data", e);
        }
        logger.info("eventId=" + getEventId(event) +
                ",state=details,insertionTimeTaken=" + (System.currentTimeMillis() - startedTime));
        return inserted;
    }

    @Override
    protected String getStateName() {
        return properties.getProperty("state.order.details.name");
    }
}
So the flow is: each time an event is received (e.g., a CustomerOrder), the handler gets an established connection and then asks the states to initialise their statements.
The timing metrics look as follows.
For the first event, it takes 580 ms to create the prepared statements for the 3 tables:
{"timeMillis":1489982655836,"thread":"ScalaTest-run-running-CustomerOrderEventHandlerSpecs","level":"INFO","loggerName":"com.xyz.customerorder.events.handler.CustomerOrderEventHandler",
"message":"eventId=1489982654314,connectionSetupTime=1,queryPreCompilation=580,insertionOnlyTimeTaken=938,insertionTotalTimeTaken=1519","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","threadId":1,"threadPriority":5}
For the second event, it takes 470 ms to prepare the statements for the 3 tables, which is less than for the first event, but only by about 100 ms. I expected it to be drastically less, as the statements should not even reach compilation:
{"timeMillis":1489982667243,"thread":"ScalaTest-run-running-PurchaseOrderEventHandlerSpecs","level":"INFO","loggerName":"com.xyz.customerorder.events.handler.CustomerOrderEventHandler",
"message":"eventId=1489982665456,connectionSetupTime=0,queryPreCompilation=417,insertionOnlyTimeTaken=1363,insertionTotalTimeTaken=1780","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.spi.AbstractLogger","threadId":1,"threadPriority":5}
What I'm thinking is that since I'm closing the preparedStatement for that particular connection, it does not even exist for a new connection. If that's the case, what is the point of statement caching at all in a multi-threaded environment?
The documentation has a similar example, but it performs the transactions inside the same connection, which is not the case for me, as I need multiple connections at the same time.
Questions
Primary
Q1) Does the DB2 JDBC driver cache statements at all between multiple connections? I don't see much difference in preparation time (see the example: the first event takes ~600 ms, the second ~500 ms).
References
ODP = Open Data Path
SQL packages
SQL packages are permanent objects used to store information related to prepared SQL statements. They can be used by the IBM iSeries Access ODBC driver and the IBM Toolbox for Java JDBC driver. They are also used by applications which use the QSQPRCED (SQL Process Extended Dynamic) API interface.
In the case of JDBC, the existence of the SQL package is checked when the client application issues the first prepare of an SQL statement. If the package does not exist, it is created at that time (even though it may not yet contain any SQL statements).
Tomcat jdbc connection pool configuration - DB2 on iSeries(AS400)
IBM Data Server Driver for JDBC and SQLJ statement caching
A couple of important things to note regarding statement caching:
Because Statement objects are child objects of a given Connection, once the Connection is closed all child objects (e.g. all Statement objects) must also be closed.
It is not possible to associate a statement from one connection with a different connection.
Statement pooling may or may not be done by a given JDBC driver. Statement pooling may also be performed by a connection management layer (i.e., an application server); a minimal per-connection cache is sketched after these notes.
Per the JDBC spec, the default value of Statement.isPoolable() is false and of PreparedStatement.isPoolable() is true; however, this flag is only a hint to the JDBC driver. There is no guarantee from the spec that statement pooling will occur.
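To make the first two notes concrete, here is a minimal hand-rolled sketch of per-connection statement caching. The class and method names are illustrative, not part of any driver API; it assumes one connection per handler thread, and the cache must be closed before the connection is returned to the pool:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class ConnectionStatementCache {

    private final Connection connection;
    private final Map<String, PreparedStatement> cache = new HashMap<>();

    public ConnectionStatementCache(Connection connection) {
        this.connection = connection;
    }

    // Prepare each distinct SQL string only once per connection.
    public PreparedStatement prepare(String sql) throws SQLException {
        PreparedStatement ps = cache.get(sql);
        if (ps == null || ps.isClosed()) {
            ps = connection.prepareStatement(sql);
            cache.put(sql, ps);
        }
        return ps;
    }

    // Statements are child objects of the connection, so close them all
    // before handing the connection back to the pool.
    public void close() throws SQLException {
        for (PreparedStatement ps : cache.values()) {
            ps.close();
        }
        cache.clear();
    }
}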
First off, I am not sure if the JT400 driver does statement caching. The document you referenced in your question comment, Optimize Access to DB2 for i5/OS from Java and WebSphere, is specific to using the JT400 JDBC driver with WebSphere application server, and on slide #3 it indicates that statement caching comes from the WebSphere connection management layer, not the native JDBC driver layer. Given that, I'm going to assume that the JT400 JDBC driver doesn't support statement caching on its own.
So at this point you are probably going to want to plug into some sort of app server (unless you want to implement statement caching on your own, which is sort of re-inventing the wheel). I know for sure that both WebSphere Application Server products (traditional and Liberty) support statement caching for any JDBC driver.
For WebSphere Liberty (the newer product), the data source config is the following:
<dataSource jndiName="jdbc/myDS" statementCacheSize="10">
<jdbcDriver libraryRef="DB2iToolboxLib"/>
<properties.db2.i.toolbox databaseName="YOURDB" serverName="localhost"/>
</dataSource>
<library id="DB2iToolboxLib">
<fileset dir="/path/to/jdbc/driver/dir" includes="jt400.jar"/>
</library>
The key bit being the statementCacheSize attribute of <dataSource>, which has a default value of 10.
(Disclaimer, I'm a WebSphere dev, so I'm going to talk about what I know)
First off, the IBM i Java documentation is here: IBM Toolbox for Java
Secondly, I don't see where you are setting the "extended dynamic" property to true, which provides:
a mechanism for caching dynamic SQL statements on the server. The first
time a particular SQL statement is prepared, it is stored in a SQL
package on the server. If the package does not exist, it is
automatically created. On subsequent prepares of the same SQL
statement, the server can skip a significant part of the processing by
using information stored in the SQL package. If this is set to "true",
then a package name must be set using the "package" property.
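As a sketch, enabling this on the Toolbox data source might look like the following. The host, credentials, package, and library names are placeholders; the setter names follow the Toolbox data source properties, and if your data source class lacks them, the same properties can be set in the JDBC URL instead (e.g. jdbc:as400://myhost;extended dynamic=true;package=MYPKG;package library=MYLIB):

import java.sql.Connection;
import java.sql.SQLException;

import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource;

public class ExtendedDynamicConfig {

    public static Connection connect() throws SQLException {
        AS400JDBCManagedConnectionPoolDataSource ds = new AS400JDBCManagedConnectionPoolDataSource();
        ds.setServerName("myhost");    // placeholder host
        ds.setUser("myuser");          // placeholder credentials
        ds.setPassword("mypassword");
        ds.setExtendedDynamic(true);   // store prepared statements in a server-side SQL package
        ds.setPackage("MYPKG");        // package name, required when extended dynamic is true
        ds.setPackageLibrary("MYLIB"); // library in which the package is created
        return ds.getConnection();
    }
}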
I think you're missing some steps in using the managed pool...here's the first example in the IBM docs
import java.sql.Connection;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource;
import com.ibm.as400.access.AS400JDBCManagedDataSource;

public class TestJDBCConnPoolSnippet
{
    void test() throws Exception
    {
        AS400JDBCManagedConnectionPoolDataSource cpds0 = new AS400JDBCManagedConnectionPoolDataSource();

        // Set general datasource properties. Note that both connection pool datasource (CPDS) and managed
        // datasource (MDS) have these properties, and they might have different values.
        cpds0.setServerName(host);
        cpds0.setDatabaseName(host); // iasp can be here
        cpds0.setUser(userid);
        cpds0.setPassword(password);
        cpds0.setSavePasswordWhenSerialized(true);

        // Set connection pooling-specific properties.
        cpds0.setInitialPoolSize(initialPoolSize_);
        cpds0.setMinPoolSize(minPoolSize_);
        cpds0.setMaxPoolSize(maxPoolSize_);
        cpds0.setMaxLifetime((int) (maxLifetime_ / 1000));     // convert to seconds
        cpds0.setMaxIdleTime((int) (maxIdleTime_ / 1000));     // convert to seconds
        cpds0.setPropertyCycle((int) (propertyCycle_ / 1000)); // convert to seconds
        //cpds0.setReuseConnections(false); // do not re-use connections

        // Set the initial context factory to use.
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");

        // Get the JNDI Initial Context.
        Context ctx = new InitialContext();

        // Note: The following is an alternative way to set context properties locally:
        //   Properties env = new Properties();
        //   env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        //   Context ctx = new InitialContext(env);

        ctx.rebind("mydatasource", cpds0); // We can now do lookups on cpds, by the name "mydatasource".

        // Create a standard DataSource object that references it.
        AS400JDBCManagedDataSource mds0 = new AS400JDBCManagedDataSource();
        mds0.setDescription("DataSource supporting connection pooling");
        mds0.setDataSourceName("mydatasource");
        ctx.rebind("ConnectionPoolingDataSource", mds0);

        DataSource dataSource_ = (DataSource) ctx.lookup("ConnectionPoolingDataSource");
        AS400JDBCManagedDataSource mds_ = (AS400JDBCManagedDataSource) dataSource_;

        boolean isHealthy = mds_.checkPoolHealth(false); // check pool health

        Connection c = dataSource_.getConnection();
    }
}

JMeter does not see the variables when trying to connect to the database

I have the following problem: when I try to create a connection, JMeter cannot find the variables.
The test has the following set of actions:
Load the local settings file (put into props).
Create a database connection. (These happen in different thread groups; I also tried the same group.)
I use the following code to load the local property file (BeanShell, Thread Group 1):
FileInputStream is = new FileInputStream(new File("d:/somefolder/somefile.properties"));
props.load(is);
is.close();
Before creating the connection, I checked the availability of the variables (BeanShell, Thread Group 2):
System.out.println(props.get("db.url"));
System.out.println(${__P("db.url")});
${__setProperty("db.url", props.get("db.url"))};
System.out.println(${__P("db.url")});
Output:
correct connection url
1 (because the __P function returns the default value if the variable is undefined; the default value here is 1)
correct connection url
Then I create the JDBC connection with the following parameters (Thread Group 2):
url: ${__P("db.url")}
The test fails because ${__P("db.url")} returns 1.
If I use ${__BeanShell(props.get(db.url))},
the test fails because props.get(db.url) returns nothing.
If I use ${__javaScript(props.get(db.url))},
the test fails because props.get(db.url) returns nothing.
The JDBC Connection Configuration component is initialized before the first thread group starts, so it doesn't see the variables - they have not been set yet.
So I created a script to connect to the DB myself:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;

ResultSet clientConfigs = null;
ResultSet gameIds = null;
Connection connect = null;
Statement statement = null;

try {
    // use getProperty so that String values come back, not Objects
    Class.forName(props.getProperty("configdb.driverClassName"));
    String connectionUrl = props.getProperty("configdb.url")
            + "?user=" + props.getProperty("configdb.username")
            + "&password=" + props.getProperty("configdb.password");
    connect = DriverManager.getConnection(connectionUrl);
    statement = connect.createStatement();

    clientConfigs = statement.executeQuery("Select keyname,valuestr from client_config WHERE valuestr Like '%/_ah/api/%'");
    ResultSetMetaData ccMetaData = clientConfigs.getMetaData();
    int clientConfigsColumns = ccMetaData.getColumnCount();
    ArrayList clientConfigsList = new ArrayList(20);
    while (clientConfigs.next()) {
        HashMap clientConfigsRow = new HashMap(clientConfigsColumns);
        for (int i = 1; i <= clientConfigsColumns; ++i) {
            clientConfigsRow.put(ccMetaData.getColumnName(i), clientConfigs.getObject(i));
        }
        clientConfigsList.add(clientConfigsRow);
    }
    vars.putObject("clientConfigs", clientConfigsList);

    gameIds = statement.executeQuery("Select gameId from game_config");
    ResultSetMetaData giMetaData = gameIds.getMetaData();
    int gameIdsColumns = giMetaData.getColumnCount();
    ArrayList gameIdsList = new ArrayList(50);
    while (gameIds.next()) {
        HashMap gameIdsRow = new HashMap(gameIdsColumns);
        for (int i = 1; i <= gameIdsColumns; ++i) {
            gameIdsRow.put(giMetaData.getColumnName(i), gameIds.getObject(i));
        }
        gameIdsList.add(gameIdsRow);
    }
    vars.putObject("gameIds", gameIdsList);
} catch (Exception e) {
    throw e;
} finally {
    try {
        if (clientConfigs != null) {
            clientConfigs.close();
        }
        if (gameIds != null) {
            gameIds.close();
        }
        if (statement != null) {
            statement.close();
        }
        if (connect != null) {
            connect.close();
        }
    } catch (Exception e) {
    }
}
Apparently JMeter loads JDBC configuration parameters very early, likely at UI load. See the discussion here:
The JDBC Config element is only processed at test startup, so it is
not possible to change the values once a test has started - i.e. you
could not change the values for different loops of the test plan. The
test plan has to know the JDBC settings near the start.
This means that if you change the value of a property in Thread Group 1 at runtime, the change will not take effect in the JDBC config.
One possible way around this is to set the property from the command line while starting JMeter.
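For example, an individual property can be passed with JMeter's standard -J flag, or the whole file can be loaded with -q (non-GUI mode shown; the plan name and URL below are placeholders):

jmeter -n -t test_plan.jmx -Jdb.url=jdbc:mysql://localhost:3306/mydb
jmeter -n -t test_plan.jmx -q d:/somefolder/somefile.properties

Properties set this way exist before any component is initialized, so the JDBC Connection Configuration can safely reference them via __P.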
Alternatively, drop the JDBC sampler entirely and use one of the custom samplers to interact with your JDBC data source.

Oracle database change notification with ODP.NET doesn't work

I'm a complete newbie to Oracle DB, trying to enable DB change notifications.
private void RegisterNotification()
{
    const string connstring = "Data Source=ORA_DB;User Id=USER;Password=pass;";
    try
    {
        var connObj = new OracleConnection(connstring);
        connObj.Open();
        var cmdObj = connObj.CreateCommand();
        cmdObj.CommandText = "SELECT * FROM MYTABLE";
        var dep = new OracleDependency(cmdObj);
        dep.QueryBasedNotification = false;
        dep.OnChange += new OnChangeEventHandler(OnNotificationReceived);
        cmdObj.ExecuteNonQuery();
        connObj.Close();
        connObj.Dispose();
        connObj = null;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

public static void OnNotificationReceived(object src, OracleNotificationEventArgs arg)
{
    MessageBox.Show("Table has changed!");
}
I've executed "GRANT CHANGE NOTIFICATION TO USER;", but nothing happens when I change the table data, either manually or programmatically. Query-based notifications don't work either. I suppose I'm missing something in the Oracle configuration.
I have Oracle 11.2 standard edition.
Change notification is not among the features of Standard Edition in the latest versions:
Licensing information - Oracle TimesTen Application-Tier Database Cache:
Data access using PL/SQL, JDBC, ODBC, ttClasses, OCI, and Pro*C/C++ interfaces
Transaction Log API (XLA) for change notification
Multi-node Cache Grid
...
SE2: N
EE: Y (extra-cost option)
Try executing your software with admin permissions and as a console application. When we dealt with this, we ran into the same issue: we never managed to get it working in web services.

MVC-Mini-Profiler falsely showing duplicate queries

I have been playing around with MVC-Mini-Profiler and found it very useful. However, on all pages I trace, I get reports of duplicate queries, like the one below.
However, I have traced the queries in SQL Server Profiler, and there is no doubt it only hits the DB once.
Am I missing a concept here, or have I set it up the wrong way? I have searched high and low for people with similar problems, with no luck, so I doubt there is a bug.
http://localhost:27941/clubs
T+175.2 ms
Reader
13.6 ms
...ExecutePageHierarchy Execute System.Collections.Generic.IEnumerable<T>.GetEnumerator GetResults Execute ExecuteStoreCommands
SELECT
[Extent1].[TeamId] AS [TeamId],
[Extent1].[Title] AS [Title],
[Extent1].[TitleShort] AS [TitleShort],
[Extent1].[LogoImageId] AS [LogoImageId],
[Extent1].[Slug] AS [Slug],
(SELECT
COUNT(1) AS [A1]
FROM [dbo].[Athletes] AS [Extent2]
WHERE [Extent1].[TeamId] = [Extent2].[TeamId]) AS [C1]
FROM [dbo].[Teams] AS [Extent1]
WHERE 352 = [Extent1].[CountryId]
http://localhost:27941/clubs
T+175.4 ms
DUPLICATE Reader
13.4 ms
...ExecutePageHierarchy Execute System.Collections.Generic.IEnumerable<T>.GetEnumerator GetResults Execute ExecuteStoreCommands
SELECT
[Extent1].[TeamId] AS [TeamId],
[Extent1].[Title] AS [Title],
[Extent1].[TitleShort] AS [TitleShort],
[Extent1].[LogoImageId] AS [LogoImageId],
[Extent1].[Slug] AS [Slug],
(SELECT
COUNT(1) AS [A1]
FROM [dbo].[Athletes] AS [Extent2]
WHERE [Extent1].[TeamId] = [Extent2].[TeamId]) AS [C1]
FROM [dbo].[Teams] AS [Extent1]
WHERE 352 = [Extent1].[CountryId]
I use EF4 and have implemented the context like this:
public class BaseController : Controller
{
    public ResultsDBEntities _db;

    public BaseController()
    {
        var rootconn = ProfiledDbConnection.Get(GetStoreConnection(ConfigurationManager.ConnectionStrings["ResultsDBEntities"].ConnectionString));
        var conn = ProfiledDbConnection.Get(rootconn);
        _db = ObjectContextUtils.CreateObjectContext<ResultsDBEntities>(conn);
    }

    public static DbConnection GetStoreConnection<T>() where T : System.Data.Objects.ObjectContext
    {
        return GetStoreConnection("name=" + typeof(T).Name);
    }

    public static DbConnection GetStoreConnection(string entityConnectionString)
    {
        DbConnection storeConnection;

        // Let entity framework do the heavy-lifting to create the connection.
        using (var connection = new EntityConnection(entityConnectionString))
        {
            // Steal the connection that EF created.
            storeConnection = connection.StoreConnection;

            // Make EF forget about the connection that we stole (HACK!)
            connection.GetType().GetField("_storeConnection",
                BindingFlags.NonPublic | BindingFlags.Instance).SetValue(connection, null);

            // Return our shiny, new connection.
            return storeConnection;
        }
    }
}
I reported this to the Mini Profiler team (http://code.google.com/p/mvc-mini-profiler/issues/detail?id=62&can=1), and they issued a patch today which appears to fix the issue.
I imagine this will be included in the next release. Hope that helps :)
