WebSphere connection pool issue, i.e. DSRA9110E

LOGGER.debug("Connection Status Disb isClosed = " + conn.isClosed());
// returns true.
crsDisbDetailstmp = DataAccess.getData("select 1 cnt from dual", conn, new String[] {});
crsDisbDetailstmp.first();
LOGGER.debug("crsDisbDetailstmp"+ crsDisbDetailstmp.getString("cnt"));
DataAccess.executeProc("PRC_MCLR_TRNPRCDTL", new String[]{strOrgId, strAccountid,strFixedRate ,strModifiers }, conn);
The exception occurs while executing the last statement, i.e. the procedure call:
Exception=com.ibm.websphere.ce.cm.ObjectClosedException: DSRA9110E: Connection is closed.
I searched a lot on Google; everything says this exception occurs because the connection is closed, and indeed conn.isClosed() returns true.
But if the connection is closed, how am I able to fire the select query?
Please help me figure this out; I have only worked on JBoss before and this is my first time on WebSphere.
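A minimal sketch of one way to guard against a stale handle before the procedure call, continuing the snippet above (the JNDI lookup and the datasource name are illustrative, not from the original code; javax.naming.InitialContext and javax.sql.DataSource imports are assumed):
// If the pooled handle reports closed, fetch a fresh connection from the
// datasource instead of reusing the old handle for the procedure call.
if (conn == null || conn.isClosed()) {
    DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myAppDS"); // hypothetical JNDI name
    conn = ds.getConnection();
}
DataAccess.executeProc("PRC_MCLR_TRNPRCDTL",
        new String[] {strOrgId, strAccountid, strFixedRate, strModifiers}, conn);
Whether the select really ran over an already-closed handle depends on how the DataAccess helper obtains its connection, so logging which physical connection each call uses may be a useful first step.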


java.sql.SQLException: Could not commit with auto-commit set on

I have a few insert and update operations in my application. Everything runs fine on Tomcat, but when I deploy to Oracle WebLogic Server I get the exception below:
java.sql.SQLException: Could not commit with auto-commit set on
In my executeUpdate method, I set the DatabaseConnection's auto-commit to false at the beginning:
dbConnection.setAutoCommit(false, id);
After the PreparedStatement's executeUpdate, if the returned count is greater than 0, I set auto-commit back to true, something like this:
dbConnection.setIsolationLevel(2, id);
count = preparedStatement.executeUpdate();
if (count > 0)
    dbConnection.setAutoCommit(true, id);
After all the operations, in the finally block we check whether the DatabaseConnection is null and then close it, something like this:
if (dbConnection != null)
{
    dbConnection.close(tranid);
    dbConnection = null;
}
The close method is wrapped roughly as below, inside a try/catch, and the message in that catch block is what gets printed:
if (connection != null)
{
    connection.commit();
    connection.close();
    connection = null;
}
Please help me out with this, because a proper commit has to happen in the real environment. I tried setting auto-commit to false in the block below and it ran without the SQLException:
if (count > 0)
    dbConnection.setAutoCommit(false, id);
My worry is that this is not the solution I'm looking for, as it causes problems in the real environment.
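For comparison, here is a minimal sketch of the plain-JDBC transaction pattern this wrapper appears to be reimplementing; it assumes a raw javax.sql.DataSource and java.sql.Connection and does not model the DatabaseConnection wrapper or its id/tranid parameters from the question:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class UpdateDao {
    private final DataSource dataSource;

    public UpdateDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int update(String sql, Object... params) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            boolean previousAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);                  // start a manual transaction
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    ps.setObject(i + 1, params[i]);
                }
                int count = ps.executeUpdate();
                conn.commit();                          // commit while auto-commit is still false
                return count;
            } catch (SQLException e) {
                conn.rollback();                        // undo partial work on failure
                throw e;
            } finally {
                conn.setAutoCommit(previousAutoCommit); // restore the pool's default last
            }
        }
    }
}
The key ordering is that commit() or rollback() runs while auto-commit is still false; switching auto-commit back to true right after the update and then calling commit() inside close() is exactly the sequence WebLogic rejects with this SQLException, even though some other driver/pool combinations tolerate it silently.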

TKProf (Oracle event 10046) on Spring Boot/JDBC

I'm trying to start Oracle tracing by invoking direct JDBC calls. I obtain my connection from Spring (Boot/JDBC), then run the trace-related ALTER SESSION statements through plain Statements, execute the query, and print to the log.
The three ALTER SESSION statements below return false. If I run the same statements through IntelliJ's console I get the intended results and my *.trc file is generated properly.
try (final Connection connection = DataSourceUtils.getConnection(dataSource)) {
    log.debug(query);
    final Long maxCount = findMaxCount();
    boolean traceIdSet = connection.createStatement().execute("ALTER SESSION SET TRACEFILE_IDENTIFIER = '" + traceId + "'");
    boolean traceEnabled = connection.createStatement().execute("ALTER SESSION SET EVENTS '10046 trace name context forever, level 8'");
    final PreparedStatement stmt = connection.prepareStatement(query);
    map(consumer, stmt.executeQuery()); // no argument: the SQL was already passed to prepareStatement
    boolean traceIdOff = connection.createStatement().execute("ALTER SESSION SET EVENTS '10046 trace name context off'");
    log.debug("|" + traceIdSet + "|" + traceEnabled + "|" + traceIdOff + "| ____________________ DONE __________________________");
} catch (SQLException e) {
    log.error("Error Performing the Query", e);
}
It has to be something in my configuration... I mean, the Java thin driver can do it, because I can do it from the IDE, so I must be missing something else, maybe a Spring Boot convention that I should change.
Could you please help? Any input is valuable.
Thanks!
My bad, the real issue was that I wasn't getting the response I expected from...
SELECT value FROM v$diag_info
where I couldn't find my trace file, only some others...
Nevertheless, the .trc files are in place, so there is no problem with Spring Boot/JDBC when it comes to enabling tracing for TKProf.
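A minimal sketch of how the trace file for the current session can be located, reusing the connection and log from the snippet above and assuming an 11g or later database where v$diag_info is available (java.sql.Statement and java.sql.ResultSet imports assumed):
// Run inside the same session that enabled the 10046 event.
try (Statement diag = connection.createStatement();
     ResultSet rs = diag.executeQuery("SELECT value FROM v$diag_info WHERE name = 'Default Trace File'")) {
    if (rs.next()) {
        log.debug("Trace file for this session: " + rs.getString(1));
    }
}
The returned path points at the server-side trace file that tkprof is then run against.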

C# NEST Bulk api failing with System.IO.IOException [duplicate]

This question already has an answer here: Elasticsearch bulk insert with NEST returns es_rejected_execution_exception
I am trying to bulk insert data from a SQL database into an Elasticsearch index. Below is the code I am using; the total number of records is around 1.5 million. I think it has something to do with a connection setting, but I am not able to figure it out. Can someone please help with this code or suggest a better way to do it?
public void InsertReceipts()
{
IEnumerable<Receipts> receipts = GetFromDB(); // get receipts from SQL DB
const string index = "receipts";
var config = ConfigurationManager.AppSettings["ElasticSearchUri"];
var node = new Uri(config);
var settings = new ConnectionSettings(node).RequestTimeout(TimeSpan.FromMinutes(30));
var client = new ElasticClient(settings);
var bulkIndexer = new BulkDescriptor();
foreach (var receiptBatch in receipts.Batch(20000)) //using MoreLinq for Batch
{
Parallel.ForEach(receiptBatch, (receipt) =>
{
bulkIndexer.Index<OfficeReceipt>(i => i
.Document(receipt)
.Id(receipt.TransactionGuid)
.Index(index));
});
var response = client.Bulk(bulkIndexer);
if (!response.IsValid)
{
_logger.LogError(response.ServerError.ToString());
}
bulkIndexer = new BulkDescriptor();
}
}
The code works fine but takes around 10 minutes to complete. When I try to increase the batch size, it fails with the error below:
Invalid NEST response built from a unsuccessful low level call on
POST: /_bulk
Invalid Bulk items: OriginalException: System.Net.WebException: The
underlying connection was closed: An unexpected error occurred on a
send. ---> System.IO.IOException: Unable to write data to the
transport connection: An existing connection was forcibly closed by
the remote host. ---> System.Net.Sockets.SocketException: An existing
connection was forcibly closed by the remote host
A good place to start is with batches of 1,000 to 5,000 documents or, if your documents are very large, with even smaller batches.
It is often useful to keep an eye on the physical size of your bulk requests. One thousand 1KB documents is very different from one thousand 1MB documents. A good bulk size to start playing with is around 5-15MB in size.
I had a similar problem. My problem was solved by adding the following code before the ElasticClient connection is established:
System.Net.ServicePointManager.Expect100Continue = false;

Trying to make a search bar for a DataGridView and getting a SqlException

I'm trying to create a form with a DataGridView, a combo box for the columns, and a text box as a search bar. I've followed lots of YouTube videos (https://www.youtube.com/watch?v=KaQs1K63Q-o&list=PL4Io1EUy0d2zUOyEiVt-GwI6WzqdEgVbZ&index=15) and I keep getting an exception. I've looked through manuals, Stack Overflow posts, videos, and other sources for days and I'm still coming up dry. I am very new to C# and to using SQL from C#, so I think it's something simple that I'm just not seeing. When I look closer at the exception detail, it says an error is being returned by SQL Server. I've tried hitting it from all angles but nothing fixes it.
SqlConnection con = new SqlConnection("Data Source = LAPTOP - 2H9706D7\\THIBEAUX; Initial Catalog = MetroTest; Integrated Security = True");
This is the connection string. The server part is "2H97067\THIBEAUX"; I put the extra "\" in to get rid of the escape sequence error.
private void textBox1_TextChanged(object sender, EventArgs e)
{
    if (comboBox1.Text == "ID")
    {
        SqlDataAdapter sda = new SqlDataAdapter("Select * from [Program Tools] where ID like '%" + textBox1.Text + "%'", con);
        DataTable data = new DataTable();
        sda.Fill(data); // the exception is thrown right here
        ToolDataGrid.DataSource = data;
    }
}
An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
Additional information: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
Can anyone help me out with this?
And yes, I know I shouldn't have put spaces in my table names; I'm working on renaming them later.

Why does h2 ignore slf4j messages on the first connection when LOG is set?

See the sample code & output below (with slf4j/logback on stdout). I can't find any bug reports on this. I'm using H2 version 1.3.176 (last stable), in-memory mode. It doesn't seem to matter what value LOG is set to (0, 1 or 2); it just has to be set.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class H2TraceTest {
    public static void main(String[] args) throws SQLException {
        System.out.println("Query connection 1");
        Connection myConn = DriverManager.getConnection("jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2");
        myConn.createStatement().execute("SELECT 1");
        System.out.println("Query connection 2");
        DriverManager.getConnection("jdbc:h2:mem:tracetest").createStatement().execute("SELECT 1");
        System.out.println("Query connection 1 again");
        myConn.createStatement().execute("SELECT 1");
        System.out.println("End");
    }
}
Output:
Query connection 1
Query connection 2
16:17:02.955 INFO h2database - jdbc[3]
/**/Connection conn2 = DriverManager.getConnection("jdbc:h2:mem:tracetest", "", "");
16:17:02.958 DEBUG h2database - jdbc[3]
/**/Statement stat2 = conn2.createStatement();
16:17:02.959 DEBUG h2database - jdbc[3]
/**/stat2.execute("SELECT 1");
16:17:02.959 INFO h2database - jdbc[3]
/*SQL #:1*/SELECT 1;
Query connection 1 again
End
I know that the H2 documentation says about TRACE_LEVEL_FILE that it affects all connections. But that's not (fully) correct:
Every connection keeps a lazy reference to the logging system. If you change it with the special marker TRACE_LEVEL_FILE=4, that reference is not updated for all existing connections, but only for those that do their first logging after the change.
So if you use the connection string "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4", everything works as expected, because your session writes no log message before the logging system is switched. Unfortunately, the LOG=2 in jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2 is evaluated first, because both parameters are written into and read from an unordered Map. And because LOG=2 generates a log statement, the reference to the slf4j adapter (=4) is never applied to the current session, only to the next one.
What can you do:
- Use only "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4" (LOG=2 is the default anyway). If you need any other log mode, you can set it afterwards with connection.createStatement().executeUpdate("SET LOG 1"), as in the sketch after this list.
- Add some other default parameters to the connection string until the TRACE_LEVEL_FILE parameter happens to be the first one in the map (not really reliable, as the order may depend on the VM).
- Discard the first connection at once.
- File a bug report and wait for the fix (or fix it yourself), as I think this is a bug of sorts.
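A minimal sketch of the first workaround, using the same in-memory URL as the test program above (the SET LOG call is only needed if you want a non-default log mode):
// TRACE_LEVEL_FILE=4 alone: nothing is logged before the slf4j adapter is installed.
Connection conn = DriverManager.getConnection("jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4");
// Optionally change the log mode afterwards, once tracing already goes through slf4j.
conn.createStatement().executeUpdate("SET LOG 1");
conn.createStatement().execute("SELECT 1"); // this statement now shows up via slf4j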
I know this is an old question, but here is a reliable way to do it (i.e. you can ensure that TRACE_LEVEL_FILE is set to 4 first):
String url = "jdbc:h2:mem:tracetest;INIT=SET TRACE_LEVEL_FILE=4\\;SET DB_CLOSE_DELAY=-1/* for example, i.e. do other stuff */";
