MVC-Mini-Profiler falsely showing duplicate queries - asp.net-mvc-3

I have been playing around with MVC-Mini-Profiler and found it very useful. However, on every page I trace, I get reports of duplicate queries, like the one below.
However, I have traced the queries in SQL Server Profiler, and there is no doubt it only hits the DB once.
Am I missing a concept here, or have I set it up the wrong way? I have searched high and low for people with similar problems, with no luck, so I doubt there is a bug.
http://localhost:27941/clubs
T+175.2 ms
Reader
13.6 ms
ExecutePageHierarchy Execute System.Collections.Generic.IEnumerable<T>.GetEnumerator GetResults Execute ExecuteStoreCommands
SELECT
[Extent1].[TeamId] AS [TeamId],
[Extent1].[Title] AS [Title],
[Extent1].[TitleShort] AS [TitleShort],
[Extent1].[LogoImageId] AS [LogoImageId],
[Extent1].[Slug] AS [Slug],
(SELECT
COUNT(1) AS [A1]
FROM [dbo].[Athletes] AS [Extent2]
WHERE [Extent1].[TeamId] = [Extent2].[TeamId]) AS [C1]
FROM [dbo].[Teams] AS [Extent1]
WHERE 352 = [Extent1].[CountryId]
http://localhost:27941/clubs
T+175.4 ms
DUPLICATE Reader
13.4 ms
ExecutePageHierarchy Execute System.Collections.Generic.IEnumerable<T>.GetEnumerator GetResults Execute ExecuteStoreCommands
SELECT
[Extent1].[TeamId] AS [TeamId],
[Extent1].[Title] AS [Title],
[Extent1].[TitleShort] AS [TitleShort],
[Extent1].[LogoImageId] AS [LogoImageId],
[Extent1].[Slug] AS [Slug],
(SELECT
COUNT(1) AS [A1]
FROM [dbo].[Athletes] AS [Extent2]
WHERE [Extent1].[TeamId] = [Extent2].[TeamId]) AS [C1]
FROM [dbo].[Teams] AS [Extent1]
WHERE 352 = [Extent1].[CountryId]
I use EF4 and have implemented the context like this:
public class BaseController : Controller
{
    public ResultsDBEntities _db;

    public BaseController()
    {
        var rootconn = ProfiledDbConnection.Get(GetStoreConnection(ConfigurationManager.ConnectionStrings["ResultsDBEntities"].ConnectionString));
        var conn = ProfiledDbConnection.Get(rootconn);
        _db = ObjectContextUtils.CreateObjectContext<ResultsDBEntities>(conn);
    }

    public static DbConnection GetStoreConnection<T>() where T : System.Data.Objects.ObjectContext
    {
        return GetStoreConnection("name=" + typeof(T).Name);
    }

    public static DbConnection GetStoreConnection(string entityConnectionString)
    {
        DbConnection storeConnection;

        // Let Entity Framework do the heavy lifting to create the connection.
        using (var connection = new EntityConnection(entityConnectionString))
        {
            // Steal the connection that EF created.
            storeConnection = connection.StoreConnection;

            // Make EF forget about the connection that we stole (HACK!).
            connection.GetType().GetField("_storeConnection",
                BindingFlags.NonPublic | BindingFlags.Instance).SetValue(connection, null);

            // Return our shiny, new connection.
            return storeConnection;
        }
    }
}

I reported this to the Mini Profiler team (http://code.google.com/p/mvc-mini-profiler/issues/detail?id=62&can=1) and they've issued a patch today which appears to fix the issue.
I imagine this will be included in the next release. Hope that helps :)

Related

JDBC Batch insert into Oracle Not working

I'm using JDBC batching to insert a million rows. I found that the Oracle driver doesn't work as expected: the batch insert takes a long time.
I decided to sniff the application's traffic with Wireshark. And what did I see?
The Oracle JDBC driver sends the first request (1),
then it sends the data (2), about 2500 rows,
the Oracle server responds with some packet (3),
and then all the remaining data is sent with one-by-one inserts, not batched!
insert into my_table...
insert into my_table...
Why does this happen? How can I fix this?
Table
create table my_table (val number);
Code
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.junit.Test;

public class scratch_1 {

    @Test
    public void foo() throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        try (Connection con = getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (long i = 0; i < 100_000; i++) {
                    ps.setBigDecimal(1, BigDecimal.valueOf(i));
                    ps.addBatch();
                }
                ps.executeBatch();
                ps.clearBatch();
            }
            con.commit();
        }
    }

    private Connection getConnection() throws SQLException {
        String url = "jdbc:oracle:thin:@localhost:1521:orcl";
        String user = "my_user";
        String password = "my_password";
        return java.sql.DriverManager.getConnection(url, user, password);
    }
}
Wireshark capture to illustrate what happened:
Environment
$ java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
Oracle Database 12.2.0.1 JDBC Driver
Server: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit
Running the query multiple times does not help; the result is the same.
250k rows were "batch" inserted in 465 seconds.
On the server side, from v$sql:
SELECT *
FROM
(SELECT REGEXP_SUBSTR (sql_text, 'insert into [^\(]*') sql_text,
sql_id,
TRUNC(
CASE
WHEN SUM (executions) > 0
THEN SUM (rows_processed) / SUM (executions)
END,2) rows_per_execution
FROM v$sql
WHERE parsing_schema_name = 'MY_SCHEMA'
AND sql_text LIKE 'insert into%'
GROUP BY sql_text,
sql_id
)
ORDER BY rows_per_execution ASC;
Problem is solved
Thank you for all your responses. I'm very grateful to you!
My previous example didn't describe the real problem. Sorry I didn't give the whole picture at once.
I simplified it to the point where I lost the handling of null values.
Please check the example below; I have updated it.
If I use java.sql.Types.NULL, the Oracle JDBC driver uses the VarcharNullBinder for the null values, and that somehow leads to this strange behavior. I think the driver batches until the first null with an unspecified type, and after that null it falls back to one-by-one inserts.
After changing it to java.sql.Types.NUMERIC for the number column, the driver uses the VarnumNullBinder and works correctly with it: fully batched.
Code
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

import org.junit.Test;

public class scratch_1 {

    @Test
    public void foo() throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        try (Connection con = getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (long i = 0; i < 100_000; i++) {
                    if (i % 2 == 0) {
                        // The real problem was here:
                        // ps.setNull(1, Types.NULL);    // wrong way!
                        ps.setNull(1, Types.NUMERIC);    // correct
                    } else {
                        ps.setBigDecimal(1, BigDecimal.valueOf(i));
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
                ps.clearBatch();
            }
            con.commit();
        }
    }

    private Connection getConnection() throws SQLException {
        String url = "jdbc:oracle:thin:@localhost:1521:orcl";
        String user = "my_user";
        String password = "my_password";
        return java.sql.DriverManager.getConnection(url, user, password);
    }
}
I am not sure where this limit comes from. However, the Oracle JDBC Developer's Guide gives this recommendation:
Oracle recommends to keep the batch sizes in the range of 100 or less. Larger batches provide little or no performance improvement and may actually reduce performance due to the client resources required to handle the large batch.
Of course, larger batch sizes may be used, but they do not necessarily increase performance, as you have witnessed. One should use the batch size that is optimal for the use case and the JDBC driver/DB in use. You should probably use batches of 2500 in your case to see the best performance benefit.
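For illustration, here is a minimal sketch of flushing the batch in fixed-size chunks instead of accumulating all the rows in a single executeBatch() call. The chunk size of 100 and the class/method names are illustrative only, not from the original post; tune the size for your driver and workload.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedInsert {

    // Illustrative chunk size, following the guide's "100 or less" recommendation.
    private static final int BATCH_SIZE = 100;

    public void insertChunked(Connection con, long rowCount) throws SQLException {
        String sql = "insert into my_table (val) values (?)";
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            int pending = 0;
            for (long i = 0; i < rowCount; i++) {
                ps.setBigDecimal(1, BigDecimal.valueOf(i));
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // send this chunk to the server
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch(); // flush the remainder
            }
        }
        con.commit();
    }
}

Flushing in chunks keeps each round trip small and avoids holding the entire batch in client memory, while still avoiding a statement execution per row.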

Oracle database change notification with ODP.NET doesn't work

I'm a complete newbie to Oracle DB, trying to enable DB change notifications.
private void RegisterNotification()
{
    const string connstring = "Data Source=ORA_DB;User Id=USER;Password=pass;";
    try
    {
        var connObj = new OracleConnection(connstring);
        connObj.Open();

        var cmdObj = connObj.CreateCommand();
        cmdObj.CommandText = "SELECT * FROM MYTABLE";

        var dep = new OracleDependency(cmdObj);
        dep.QueryBasedNotification = false;
        dep.OnChange += new OnChangeEventHandler(OnNotificationReceived);

        cmdObj.ExecuteNonQuery();

        connObj.Close();
        connObj.Dispose();
        connObj = null;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

public static void OnNotificationReceived(object src, OracleNotificationEventArgs arg)
{
    MessageBox.Show("Table has changed!");
}
I've executed "GRANT CHANGE NOTIFICATION TO USER;", but nothing happens when I change the table data, either manually or programmatically. Query-based notifications don't work either. I suppose I'm missing something in the Oracle configuration.
I have Oracle 11.2 standard edition.
The change notification feature is not among the features of Standard Edition in the latest versions:
Licensing information
Oracle TimesTen Application-Tier Database Cache:
Data access using PL/SQL, JDBC, ODBC, ttClasses, OCI, and Pro*C/C++ interfaces
Transaction Log API (XLA) for change notification
Multi-node Cache Grid
...
SE2: N
EE: Y (extra-cost option)
Try running your software with admin permissions and as a console application. When we dealt with this, we ran into the same issues; we never managed to get it working in web services.

JDBC getObject on table user returns error : "access to class (class) is prohibited"

I am getting a weird response while trying to read all of the columns of the users in the "user" table of my "mysql" database.
I am using Google Cloud SQL, accessed from Google Apps Script with the Jdbc service. Here is my code:
function displayingOneUser() {
  var connection = Jdbc.getCloudSqlConnection("jdbc:google:rdbms://blablabla/mysql", "root", rootPwd);
  var mySqlStatement = connection.createStatement();
  var resultSet = mySqlStatement.executeQuery("select * from user");
  if (resultSet.next()) {
    var str = "";
    for (var i = 1; i < 43; i++) {
      str = str.concat(resultSet.getObject(i) + " | ");
      Logger.log("str so far : " + str);
    }
    Logger.log("Displaying first user : " + str + ".");
  }
  mySqlStatement.close();
  connection.close();
}
I get the following error: "Access to class "(class)" is prohibited." when it reaches column 33 or 34 (I don't remember which one).
If I use getString() instead, it works without a problem.
Any idea where that could come from?
Cheers,
Victor
I was able to reproduce this problem and I believe this is indeed a bug. It has to do with blob columns in the user table in the default mysql DB.
Trying to read any of those columns with getObject() breaks. getString() works.
Good find, please log a bug in the Issue Tracker so we can dig into this.
It's column 34, ssl_cipher, that is the first blob column to break. Others break too. Simplest repro case:
function displayingOneUser() {
  var conn = Jdbc.getCloudSqlConnection("jdbc:google:rdbms://INSTANCE_NAME/mysql");
  var sql = conn.createStatement();
  var resultSet = sql.executeQuery("select ssl_cipher from user");
  if (resultSet.next()) {
    resultSet.getObject(1); // this line will break
  }
  sql.close();
  conn.close();
}

Oracle sessions stay open after closing connection

While testing a new application, we came across an issue where a stored proc sometimes takes over 1 minute to execute and causes a timeout. It was not one stored proc in particular; it could be any of them.
Trying to reproduce the issue, I created a small (local) test app that calls the same stored proc from different threads (code below).
Now it seems that the Oracle sessions are still there, inactive, and the CPU of the Oracle server hits 100%.
I use System.Data.OracleClient.
I'm not sure if one is related to the other, but it slows down the time needed to get an answer from the database.
for (int index = 0; index < 1000; ++index)
{
    ThreadPool.QueueUserWorkItem(GetStreet, index);
    _runningThreads++;
    WriteThreadnumber(_runningThreads);
}

private void GetStreet(object nr)
{
    const string procName = "SPCK_ISU.GETPREMISESBYSTREET";
    DataTable dataTable = null;
    var connectionstring = ConfigurationManager.ConnectionStrings["CupolaDB"].ToString();

    try
    {
        using (var connection = new OracleConnection(connectionstring))
        {
            connection.Open();
            using (var command = new OracleCommand(procName, connection))
            {
                // Fill parameters
                using (var oracleDataAdapter = new OracleDataAdapter(command))
                {
                    // Fill datatable
                }
            }
        }
    }
    finally
    {
        if (dataTable != null)
            dataTable.Dispose();
    }
}
EDIT:
I just had the DBA count the open sessions: there are 105 sessions that stay open and inactive. After closing my application, the sessions are removed.
Problem is solved.
We hired an Oracle expert to take a look at this, and the problem was caused by some underlying stored procedures that took a while to execute and consumed a lot of CPU.
After the necessary tuning, everything runs smoothly.

Windows Workflow Foundation 4.0 and Tracking

I'm working with the Beta 2 version of Visual Studio 2010 to get some advanced learning using WF4. I've been working with the SqlTracking Sample in the WF_WCF_Samples SDK, and have gotten a pretty good understanding of how to emit and store tracking data in a SQL Database, but haven't seen anything on how to query the data when needed. Does anyone know if there are any .Net classes that are to be used for querying the tracking data, and if so are there any known samples, tutorials, or articles that describe how to query the tracking data?
According to Matt Winkler, from the Microsoft WF4 team, there isn't any built-in API for querying the tracking data; the developer must write his/her own.
These can help:
WorkflowInstanceQuery Class
Workflow Tracking and Tracing
Tracking Participants in .NET 4 Beta 1
Old question, I know, but there is actually a more or less official API in AppFabric: Windows Server AppFabric Class Library
You'll have to find the actual DLLs in %SystemRoot%\AppFabric (after installing AppFabric, of course). Pretty weird place to put them.
The key classes to look at are SqlInstanceQueryProvider and InstanceQueryExecuteArgs. The query API is asynchronous and can be used something like this (C#):
public InstanceInfo GetWorkflowInstanceInformation(Guid workflowInstanceId, string connectionString)
{
    var instanceQueryProvider = new SqlInstanceQueryProvider();

    // The connection string to the instance store needs to be set like this:
    var parameters = new NameValueCollection()
    {
        { "connectionString", connectionString }
    };
    instanceQueryProvider.Initialize("Provider", parameters);

    var queryArgs = new InstanceQueryExecuteArgs()
    {
        InstanceId = new List<Guid>() { workflowInstanceId }
    };

    // This throws away the asynchronous advantages and blocks on a wait handle instead.
    // (synchronizer is assumed to be a class-level lock object, e.g. a private readonly object field.)
    var waitEvent = new ManualResetEvent(false);
    IEnumerable<InstanceInfo> retrievedInstanceInfos = null;

    var query = instanceQueryProvider.CreateInstanceQuery();
    query.BeginExecuteQuery(
        queryArgs,
        TimeSpan.FromSeconds(10),
        ar =>
        {
            lock (synchronizer)
            {
                retrievedInstanceInfos = query.EndExecuteQuery(ar).ToList();
            }
            waitEvent.Set();
        },
        null);

    var waitResult = waitEvent.WaitOne(5000);
    if (waitResult)
    {
        List<InstanceInfo> instances = null;
        lock (synchronizer)
        {
            if (retrievedInstanceInfos != null)
            {
                instances = retrievedInstanceInfos.ToList();
            }
        }
        if (instances != null)
        {
            if (instances.Count() == 1)
            {
                return instances.Single();
            }
            if (!instances.Any())
            {
                Log.Warning("Request for non-existing WorkflowInstanceInfo: {0}.", workflowInstanceId);
                return null;
            }
            Log.Error("More than one(!) WorkflowInstanceInfo for id: {0}.", workflowInstanceId);
        }
    }
    Log.Error("Time out retrieving information for id: {0}.", workflowInstanceId);
    return null;
}
And just to clarify - this does NOT give you access to the tracking data, which are stored in the Monitoring Database. This API is only for the Persistence Database.
