Close postgres transaction - Elixir - phoenix-framework

I am trying to run a raw SQL query with Elixir. The Postgres connection opens and the query runs successfully, but afterwards the transaction is not closed; it stays in the Idle state.
qry = "select id from table where id = 1"
{:ok, pid} = Postgrex.start_link(
hostname: "localhost",
username: "user",
password: "password",
database: "db",
pool_size: 100)
Postgrex.query!(pid, qry, [])
In Postgres:
PID     Query   Status
2323    qry     Idle
How do I close the transaction after the query has finished executing?

So, Postgrex.query!/4 runs an extended query. You can close an extended query with Postgrex.close!/3, but it looks like only in this way:
{:ok, conn} = Postgrex.start_link([])
query = Postgrex.prepare!(conn, :query_name, "SELECT * ....")
result = Postgrex.query!(conn, query, [])
Postgrex.close!(conn, query)

Related

H2 show value of DB_CLOSE_DELAY (set by SET DB_CLOSE_DELAY)

The H2 Database has a list of commands starting with SET, in particular SET DB_CLOSE_DELAY. I would like to find out what the value of DB_CLOSE_DELAY is. I am using JDBC. Setting it is easy:
cx.createStatement.execute("SET DB_CLOSE_DELAY 0")
but none of the following returns the actual value of DB_CLOSE_DELAY:
cx.createStatement.executeQuery("DB_CLOSE_DELAY")
cx.createStatement.executeQuery("VALUES(#DB_CLOSE_DELAY)")
cx.createStatement.executeQuery("GET DB_CLOSE_DELAY")
cx.createStatement.executeQuery("SHOW DB_CLOSE_DELAY")
Help would be greatly appreciated.
You can access this and other settings in the INFORMATION_SCHEMA.SETTINGS table - for example:
String url = "jdbc:h2:mem:;DB_CLOSE_DELAY=3";
Connection conn = DriverManager.getConnection(url, "sa", "the password goes here");
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM INFORMATION_SCHEMA.SETTINGS where name = 'DB_CLOSE_DELAY'");
while (rs.next()) {
    System.out.println(rs.getString("name"));
    System.out.println(rs.getString("value"));
}
In this test, I use an unnamed in-memory database, and I explicitly set the delay to 3 seconds when I create the DB.
The output from the print statements is:
DB_CLOSE_DELAY
3

PoolLimitSQLException with temporary table [duplicate]

This question already has answers here:
JDBC MySql connection pooling practices to avoid exhausted connection pool (2 answers)
Closing JDBC Connections in Pool (3 answers)
Do you need to close the connection you get from jdbc connection pool? [duplicate] (2 answers)
Closed 3 years ago.
I need to use a global temporary table as a proxy to read a BLOB column over a database link in an application I've inherited. Using ON COMMIT DELETE ROWS I wasn't able to read my values back (as if every query was being run on a different connection from the pool), so I eventually settled on:
CREATE GLOBAL TEMPORARY TABLE TMP_FOO (
    ID VARCHAR2(64 BYTE) NOT NULL ENABLE,
    FOO BLOB NOT NULL ENABLE,
    CONSTRAINT TMP_FOO_PK PRIMARY KEY (ID) ENABLE
) ON COMMIT PRESERVE ROWS;
While this works, I'm now repeatedly getting:
weblogic.jdbc.extensions.PoolLimitSQLException: weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool foods to allocate to applications, please increase the size of the pool and retry
WebLogic console certainly shows 15 active connections. But I am the only user!
What am I doing wrong that's preventing connections from being reused?
// Custom method that returns java.sql.Connection from javax.naming.InitialContext.lookup(String)
Connection conn = ds.getConnection();
PreparedStatement st = null;
ResultSet rs = null;
String sql;
Blob foo;

try {
    sql = "INSERT INTO TMP_FOO (ID, FOO) SELECT ?, FOO FROM REMOTE_TABLE@DBLINK WHERE REMOTE_ID = ?";
    st = conn.prepareStatement(sql);
    st.setString(1, "Random UUID here");
    st.setString(2, "20");
    System.out.println("Rows inserted: " + st.executeUpdate());

    sql = "SELECT FOO FROM TMP_FOO WHERE ID = ?";
    st = conn.prepareStatement(sql);
    st.setString(1, "Random UUID here");
    rs = st.executeQuery();
    if (!rs.next()) {
        throw new RuntimeException("Not found");
    }
    foo = rs.getBlob("foo");
} catch (Exception e) {
    // Final code will do something meaningful here
    System.out.println("[ERROR] " + e.getMessage());
} finally {
    conn.rollback();
}
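
The general pattern the linked duplicates point to is to give every connection back to the pool by closing it, typically with try-with-resources. Below is a minimal sketch of that pattern, not code from this thread: the JNDI name jdbc/foods and the method shape are made up, and only the read half of the logic is shown.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class TmpFooReader {
    public byte[] readFoo(String id) throws Exception {
        // Hypothetical JNDI name; the question uses a custom lookup helper instead.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/foods");
        String sql = "SELECT FOO FROM TMP_FOO WHERE ID = ?";
        try (Connection conn = ds.getConnection();
             PreparedStatement st = conn.prepareStatement(sql)) {
            st.setString(1, id);
            try (ResultSet rs = st.executeQuery()) {
                if (!rs.next()) {
                    throw new RuntimeException("Not found");
                }
                return rs.getBytes("FOO");
            }
        } // conn.close() runs here and hands the connection back to the WebLogic pool
    }
}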

Is there any way to view the physical SQLs executed by Calcite JDBC?

Recently I have been studying Apache Calcite. So far I can use explain plan for via JDBC to view the logical plan, but I am wondering how I can view the physical SQL used during plan execution. Since there may be bugs in the physical SQL generation, I need to make sure it is correct.
val connection = DriverManager.getConnection("jdbc:calcite:")
val calciteConnection = connection.asInstanceOf[CalciteConnection]
val rootSchema = calciteConnection.getRootSchema()

val dsInsightUser = JdbcSchema.dataSource(
  "jdbc:mysql://localhost:13306/insight?useSSL=false&serverTimezone=UTC",
  "com.mysql.jdbc.Driver", "insight_admin", "xxxxxx")
val dsPerm = JdbcSchema.dataSource(
  "jdbc:mysql://localhost:13307/permission?useSSL=false&serverTimezone=UTC",
  "com.mysql.jdbc.Driver", "perm_admin", "xxxxxx")

rootSchema.add("insight_user", JdbcSchema.create(rootSchema, "insight_user", dsInsightUser, null, null))
rootSchema.add("perm", JdbcSchema.create(rootSchema, "perm", dsPerm, null, null))

val stmt = connection.createStatement()
val rs = stmt.executeQuery("""explain plan for select "perm"."user_table".* from "perm"."user_table" join "insight_user"."user_tab" on "perm"."user_table"."id"="insight_user"."user_tab"."id" """)
val metaData = rs.getMetaData()
while (rs.next()) {
  for (i <- 1 to metaData.getColumnCount) printf("%s ", rs.getObject(i))
  println()
}
The result is:
EnumerableCalc(expr#0..3=[{inputs}], proj#0..2=[{exprs}])
  EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])
    JdbcToEnumerableConverter
      JdbcTableScan(table=[[perm, user_table]])
    JdbcToEnumerableConverter
      JdbcProject(id=[$0])
        JdbcTableScan(table=[[insight_user, user_tab]])
There is a Calcite hook, Hook.QUERY_PLAN, that is triggered with the JDBC query strings. From the source:
/** Called with a query that has been generated to send to a back-end system.
* The query might be a SQL string (for the JDBC adapter), a list of Mongo
* pipeline expressions (for the MongoDB adapter), et cetera. */
QUERY_PLAN;
You can register a listener to log any query strings, like this in Java:
Hook.QUERY_PLAN.add((Consumer<String>) s -> LOG.info("Query sent over JDBC:\n" + s));
It is also possible to see the generated SQL query by setting the calcite.debug=true system property. The exact place where this happens is in JdbcToEnumerableConverter. Since this happens during the execution of the query, you will have to remove the "explain plan for" prefix from the query passed to stmt.executeQuery.
Note that with debug mode enabled you will also get a lot of other messages, as well as other information regarding the generated code.
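Putting the two together, a small self-contained Java variant of the question's setup could look roughly like this; the model.json file (wiring up the perm and insight_user MySQL schemas) is an assumption, not something from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CalciteDebugSql {
    public static void main(String[] args) throws Exception {
        // Must be set before the Calcite connection is created.
        System.setProperty("calcite.debug", "true");

        // "model.json" is a hypothetical model file describing the two MySQL schemas.
        try (Connection connection =
                 DriverManager.getConnection("jdbc:calcite:model=model.json");
             Statement stmt = connection.createStatement();
             // Run the query itself, without "explain plan for", so that
             // JdbcToEnumerableConverter generates the back-end SQL and the
             // debug output shows it.
             ResultSet rs = stmt.executeQuery(
                 "select \"perm\".\"user_table\".* from \"perm\".\"user_table\" "
                     + "join \"insight_user\".\"user_tab\" "
                     + "on \"perm\".\"user_table\".\"id\" = \"insight_user\".\"user_tab\".\"id\"")) {
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
        }
    }
}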

Char(3) field @Id is not cacheable in EclipseLink JPA

For historical reasons I have some tables with a char(3) primary key in our database, for example the country table.
When I find the entity with:
String id = "D ";
Country c = em.find(Country.class, id);
afterwards I can see:
c.getId() --> "D" and not "D "
The entity is read again and again from the database. The caching does not work for some reason. I guess the id exists in the cache as "D" and not as "D ".
20130423 09:15:14,495 FINEST query Execute query ReadObjectQuery(name="country" referenceClass=Country )
20130423 09:15:14,498 FINEST connection reconnecting to external connection pool
20130423 09:15:14,498 FINE sql SELECT countryID, countryNAME, countryTELCODE, countryTOPLEVELDOMAIN, countryINTTELPREFIX FROM geo.COUNTRY WHERE (countryID = ?)
bind => [D ]
20130423 09:15:14,508 FINEST query Execute query ReadObjectQuery(name="country" referenceClass=Country )
20130423 09:15:14,508 FINEST connection reconnecting to external connection pool
20130423 09:15:14,508 FINE sql SELECT countryID, countryNAME, countryTELCODE, countryTOPLEVELDOMAIN, countryINTTELPREFIX FROM geo.COUNTRY WHERE (countryID = ?)
bind => [D ]
I tried to set @Column(length=3) but it has no effect.
Does anybody have an idea why the cache does not work like it should?
Thanks, Hasan
By default EclipseLink trims trailing spaces from CHAR fields. This can be disabled.
See:
http://wiki.eclipse.org/EclipseLink/FAQ/JPA#How_to_avoid_trimming_the_trailing_spaces_on_a_CHAR_field.3F
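
One way the FAQ describes, if I recall it correctly, is a SessionCustomizer that turns the trimming off; the method and property names below are taken from memory of the EclipseLink docs, so verify them against your version:

import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.sessions.Session;

// Sketch of the approach the linked FAQ describes; double-check the API names.
public class KeepCharPaddingCustomizer implements SessionCustomizer {
    @Override
    public void customize(Session session) throws Exception {
        // Keep trailing spaces on CHAR columns so the cached id keeps its padding
        // and matches the padded key passed to em.find(...).
        session.getLogin().setShouldTrimStrings(false);
    }
}

The customizer would then be registered in persistence.xml via the eclipselink.session.customizer property.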

Will this kind of bash work as expected?

#!/bin/bash
MYSQL="/usr/bin/mysql -uroot "
function create_db()
{
local db_name=${1}
$MYSQL<<!
create database IF NOT EXISTS ${db_name};
!
}
###-------------------------tb_bind_userid_xxx-------------------------------------
function create_table_userid
{
$MYSQL << !
create table if NOT EXISTS bind_pay.tb_bind_userid_${1}(
b_userid bigint not null,
b_membercode bigint not null ,
PRIMARY KEY (b_userid)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
!
}
Will $MYSQL be persistent across function calls, or will it reconnect each time?
If it reconnects every time, I don't think create_table_userid will work as expected, because it hasn't specified a database name yet.
Well, because you call the function each time you want to create a table, you run mysql and connect to the database each time. If you want a persistent connection, one way is to use a MySQL library, which most major programming languages (Perl/Python/Ruby/PHP etc.) support these days. You make a DB connection, do whatever work you need, and finally close the connection. For example, from the Python/MySQL documentation:
import MySQLdb

conn = MySQLdb.connect(host = "localhost",
                       user = "testuser",
                       passwd = "testpass",
                       db = "test")
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
row = cursor.fetchone()
print "server version:", row[0]
cursor.close()
conn.close()
As you can see, a connection conn is opened to the database. Then, using the connection handle (or rather database handle), the work is done, and finally the connection is closed.
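
For comparison, the same idea in Java with plain JDBC (host, port, and credentials below are placeholders): one connection is opened, reused for both statements, and closed once at the end.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; no database is selected in the URL
        // so that the CREATE DATABASE statement can run first.
        String url = "jdbc:mysql://localhost:3306/";
        try (Connection conn = DriverManager.getConnection(url, "root", "secret");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS bind_pay");
            stmt.executeUpdate(
                "CREATE TABLE IF NOT EXISTS bind_pay.tb_bind_userid_0 ("
                    + " b_userid BIGINT NOT NULL,"
                    + " b_membercode BIGINT NOT NULL,"
                    + " PRIMARY KEY (b_userid)"
                    + ") ENGINE=MyISAM DEFAULT CHARSET=utf8");
        } // the single connection is closed here
    }
}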
$MYSQL is just a variable, so your code runs mysql each time it calls one of these functions.
You can create a persistent connection to mysql easily enough; just have your program write its SQL to standard output, then pipe the whole output into mysql:
create_table() {
cat <<!
create table $1 ....
!
}

(
create_table "foo"
create_table "bar"
) | mysql
