In my integration test, I am creating an H2 database with two schemas, A and B. A is set as the default schema, like it is in the normal setup for the application when it is running with a PostgreSQL database. During the integration test, I am starting both the H2 database and an embedded Tomcat server and execute the SQL files to initialise the database via Liquibase.
All models mapped to tables in schema A are annotated with @Table("tablename"), whereas models for schema B are annotated with @Table("B.tablename"). When I call a REST endpoint on the embedded server, ActiveJDBC warns me:
WARN org.javalite.activejdbc.Registry - Failed to retrieve metadata for table: 'B.tablename'. Are you sure this table exists? For some databases table names are case sensitive.
When I then try to access a table in schema B from my Java code, ActiveJDBC throws the following exception (which is expected after the previous warning):
org.javalite.activejdbc.InitException: Failed to find table: B.tablename
at org.javalite.activejdbc.MetaModel.getAttributeNames(MetaModel.java:248)
at org.javalite.activejdbc.Model.hydrate(Model.java:207)
at org.javalite.activejdbc.ModelDelegate.instance(ModelDelegate.java:247)
at org.javalite.activejdbc.ModelDelegate.instance(ModelDelegate.java:241)
...
Accessing tables in schema A works as expected.
I am sure that the tables in schema B are actually created and contain data: besides the Liquibase log entries for the executed files, I can also query the database directly and get the table content back:
Initialisation of the database and server:

private String H2_CONNECTION_STRING = "jdbc:h2:mem:testdb;INIT=CREATE SCHEMA IF NOT EXISTS A\\;SET SCHEMA A\\;CREATE SCHEMA IF NOT EXISTS B\\;";

@Before
public void initializeDatabase() throws SQLException {
    connection = DriverManager.getConnection(H2_CONNECTION_STRING);
    Statement stat = connection.createStatement();
    stat.execute("GRANT ALTER ANY SCHEMA TO PUBLIC");
    LiquibaseInitialisation.initH2(H2_CONNECTION_STRING); // execute SQL scripts
    EmbeddedServer.startServer();
}
Query to print content of B.tablename:
Logger log = LoggerFactory.getLogger("test");
Statement stat = connection.createStatement();
stat.execute("SELECT * FROM B.tablename;");
connection.commit();
ResultSet resultSet = stat.getResultSet();
ResultSetMetaData rsmd = resultSet.getMetaData();
int columnsNumber = rsmd.getColumnCount();
while (resultSet.next()) {
    StringBuilder builder = new StringBuilder();
    for (int i = 1; i <= columnsNumber; i++) {
        builder.append(resultSet.getString(i));
        builder.append(" ");
    }
    log.info(builder.toString());
}
This produces the desired output of the content of B.tablename.
The question is this: why doesn't ActiveJDBC find the tables in schema B in the H2 database when they are clearly present, while the same setup works flawlessly with PostgreSQL? Am I missing something with regard to schemas in H2 or ActiveJDBC?
Please log this as an issue at https://github.com/javalite/activejdbc/issues and provide full instructions to replicate the condition. It would be best if you could provide a small sample project.
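In the meantime, it may help to compare what the JDBC metadata reports for schema B, since ActiveJDBC builds its table metadata from DatabaseMetaData rather than by running queries. A sketch reusing the connection from the question (the schema and table names are the ones from your setup):

```java
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;

// List every table the H2 driver reports for schema B. If "tablename" does
// not show up here, ActiveJDBC cannot see it either, regardless of what a
// direct SELECT returns.
DatabaseMetaData md = connection.getMetaData();
try (ResultSet tables = md.getTables(null, "B", "%", new String[] { "TABLE" })) {
    while (tables.next()) {
        System.out.println(tables.getString("TABLE_SCHEM") + "." + tables.getString("TABLE_NAME"));
    }
}
```

Note that H2 upper-cases unquoted identifiers, so the schema may be reported as "B" but the table as "TABLENAME"; that alone can explain a case-sensitive metadata lookup failing.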
Below is a snippet of the Spring Boot JDBC item reader code that calls the paging query provider.

final SqlPagingQueryProviderFactoryBean testVar = new SqlPagingQueryProviderFactoryBean();
testVar.setDataSource(dataSource);
testVar.setSelectClause("select *");
testVar.setFromClause("from " + tableName);
testVar.setWhereClause("where processtime is NULL AND RECORN_NUM BETWEEN :startPos AND :endPos");
return ..
Error:

org.springframework.jdbc.UncategorizedSQLException: StatementCallback; uncategorized SQLException for SQL [SELECT * FROM (SELECT * FROM table_name WHERE processtime is NULL AND RECORN_NUM BETWEEN :startPos AND :endPos ORDER BY RECORD_NUM) <= 100]; SQL state [72000]; error code [10008]: ORA-10008: not all variables bound; nested exception
I am trying to read data from an Oracle table via a Spring Boot batch job using a JDBC item reader.
First, the SQL keywords select, from, etc. can be omitted from the clauses.
The real problem is that you are not providing sort keys. You need to provide a sort key, as mentioned in the Javadoc of AbstractSqlPagingQueryProvider, because the sort keys drive the way the paging SQL queries are generated.
If you declare the SqlPagingQueryProviderFactoryBean as a bean, the validation of sort keys is triggered and you would get an exception earlier, at configuration time.
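For illustration, a configuration sketch with a sort key added (the variable name and the RECORD_NUM column are taken from the question; this uses Spring Batch's Order enum and is a sketch, not a drop-in fix):

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.item.database.Order;
import org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean;

// Same factory bean as in the question, now with a sort key so the
// provider can generate deterministic paging SQL.
final SqlPagingQueryProviderFactoryBean testVar = new SqlPagingQueryProviderFactoryBean();
testVar.setDataSource(dataSource);
testVar.setSelectClause("select *");
testVar.setFromClause("from " + tableName);
testVar.setWhereClause("where processtime is NULL");
Map<String, Order> sortKeys = new HashMap<>();
sortKeys.put("RECORD_NUM", Order.ASCENDING);
testVar.setSortKeys(sortKeys);
```

With the sort key in place the provider itself generates the paging window, so the BETWEEN :startPos AND :endPos predicate in the where clause is no longer needed.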
I am using Mirth Connect 3.5.0.8232. I have created a persisted connection to an Oracle database and use it throughout my source and destination connectors. One of the methods Mirth provides for talking to the database is executeUpdateAndGetGeneratedKeys. It would be quite useful for insert statements that should return the primary keys of the inserted rows.
My question is: how do you specify WHICH columns to return? Running the provided function works, but it returns ROWID in the CachedRowSet, which is not what I want.
As far as I understand, which columns are returned depends on the type of database, and every database behaves differently. I am interested in Oracle specifically.
Thank you.
The executeUpdateAndGetGeneratedKeys method uses the Statement.RETURN_GENERATED_KEYS flag to signal to the driver that auto-generated keys should be returned. However, from the Oracle docs:
If key columns are not explicitly indicated, then Oracle JDBC drivers cannot identify which columns need to be retrieved. When a column name or column index array is used, Oracle JDBC drivers can identify which columns contain auto-generated keys that you want to retrieve. However, when the Statement.RETURN_GENERATED_KEYS integer flag is used, Oracle JDBC drivers cannot identify these columns. When the integer flag is used to indicate that auto-generated keys are to be returned, the ROWID pseudo column is returned as key. The ROWID can then be fetched from the ResultSet object and can be used to retrieve other columns.
So instead, try using their suggestion of passing in a column name array to prepareStatement:
var dbConn;
try {
dbConn = DatabaseConnectionFactory.createDatabaseConnection('oracle.jdbc.driver.OracleDriver','jdbc:oracle:thin:@localhost:1521:DBNAME','user','pass');
// Create a Java String array directly
var keyColumns = java.lang.reflect.Array.newInstance(java.lang.String, 1);
keyColumns[0] = 'id';
var ps = dbConn.getConnection().prepareStatement('INSERT INTO tablename (columnname) VALUES (?)', keyColumns);
try {
// Set variables here
ps.setObject(1, 'test');
ps.executeUpdate();
var result = ps.getGeneratedKeys();
result.next();
var generatedKey = result.getObject(1);
logger.info(generatedKey);
} finally {
ps.close();
}
} finally {
if (dbConn) {
dbConn.close();
}
}
Considering a Spring Boot/Neo4j environment with Spring Data Neo4j 4, I want to perform a delete and get an error message when it fails to delete anything.
My problem is that since Repository.delete() returns void, I have no idea whether the delete modified anything or not.
First question: is there any way to get the number of rows affected by the last query? For example, in PL/SQL I could use SQL%ROWCOUNT.
So anyway, I tried the following code:
public void deletesomething(Long somethingId) {
    somethingRepository.delete(getExistingsomething(somethingId, 1).getId());
}

private something getExistingsomething(Long somethingId, int depth) {
    return Optional.ofNullable(somethingRepository.findOne(somethingId, depth))
        .orElseThrow(() -> new somethingNotFoundException(somethingId));
}
In the code above I query the database to check if the value exist before I delete it.
Second question: do you recommend any different approach?
So now, just to add some complexity: I have a clustered database where db1 can only create, update and delete, and db2 and db3 can only read (this is enforced by the cluster sockets). db2 and db3 receive data from db1 through replication.
From what I have seen so far, replication can take up to 90 s, which means the databases can be in different states for up to 90 s.
Looking again to the code above:
public void deletesomething(Long somethingId) {
    somethingRepository.delete(getExistingsomething(somethingId, 1).getId());
}
in debug that means:
getExistingsomething(somethingId, 1).getId() // will hit db2
somethingRepository.delete(...) // will hit db1
and so, if replication has not yet propagated the value to db2, this code will throw the exception.
Third question: without changing those sockets, is there any way for me to delete and return the correct response?
This is not currently supported in Spring Data Neo4j; if you wish, please open a feature request.
In the meantime, perhaps the easiest workaround is to drop down to the OGM level of abstraction:
Create a class that is injected with org.neo4j.ogm.session.Session.
Use the query method on Session, as below.
Example (in Kotlin, which is what was on hand):
fun deleteProfilesByColor(color: String) {
    val query = """
        MATCH (n:Profile {color: {color}})
        DETACH DELETE n;
    """
    val params = mutableMapOf(
        "color" to color
    )
    val result = session.query(query, params)
    val statistics = result.queryStatistics() // use these to see what the delete actually did
}
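The statistics object is what answers the first question. For instance (a sketch, assuming OGM's QueryStatistics exposes nodesDeleted; the exception choice is hypothetical):

```kotlin
// Sketch: treat a delete that removed no nodes as a failure.
if (statistics.nodesDeleted == 0) {
    throw IllegalStateException("No Profile nodes with color '$color' were deleted")
}
```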
I have run SqlMetal.exe against my database.
SqlMetal.exe /server:server /database:dbname /code:mapping.cs
I have included the generated file in my solution, so I can now create an object for each of the database tables. Great. I now wish to use LINQ to query my database. Can I presume that none of the connection handling is provided by the output of SqlMetal.exe? If so, in what ways can I use LINQ to query my database?
Does the generated code include a Data Context (a class which inherits from System.Data.Linq.DataContext)? If so, then that's probably what you're looking for. Something like this:
var db = new SomeDataContext();
// You can also specify a connection string manually in the above constructor if you want
var records = db.SomeTable.Where(st => st.id == someValue);
// and so on...
I am using HSQLDB as the database. I created an LmexPostParamDao with a method insertLmexPostParam(NameValuePostParamVO nameValuePostParamVO) that inserts data into the database. To test this method I wrote a JUnit test that inserts some data into HSQLDB.
My JUnit test method is as below:
@Test
public void testInsertLmexPostParam(){
String lmexPostParamId = UUID.randomUUID().toString();
NameValuePostParamVO nameValuePostParamVO = new NameValuePostParamVO();
nameValuePostParamVO.setLmexPostParamId(lmexPostParamId);
nameValuePostParamVO.setParamName("adapter_id");
nameValuePostParamVO.setParamValue("7");
lmexPostParamDao.insertLmexPostParam(nameValuePostParamVO);
}
My insert method is as below:
@Override
public void insertLmexPostParam(NameValuePostParamVO nameValuePostParamVO) {
String insertQuery = "insert into LMEX_POST_PARAM(lmex_post_param_id, param_name, param_value) values (?,?,?)";
String[] paramArr = { nameValuePostParamVO.getLmexPostParamId(), nameValuePostParamVO.getParamName(), nameValuePostParamVO.getParamValue()};
int update = adapterJdbcTemplate.update(insertQuery, paramArr);
System.out.println(update);
}
When I run my test case it returns 1, which is the result of adapterJdbcTemplate.update() and means the row was inserted successfully. But when I look at the database, that row is not there. When I debug the test method with the same values, it throws a data integrity violation exception, and after this exception the row does appear in the database. What could the problem be? The code looks fine to me. Please help me resolve this.
Thank you
Check your CREATE TABLE statement for LMEX_POST_PARAM. The CHAR and VARCHAR types must be defined as CHAR(N) and VARCHAR(N), with N large enough to accept the values that you insert. For example, VARCHAR(100).
If your problem is that the database is not showing the successful insert, then you should create your database with WRITE_DELAY = 0 (or false). See the HSQLDB documentation for the version you are using.
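For example, assuming a file-based test database named testdb, the write delay can be turned off either with an SQL statement right after connecting or via a connection property; the exact syntax depends on the HSQLDB version, so check the documentation for yours:

```sql
-- HSQLDB 2.x: persist committed changes immediately.
SET FILES WRITE DELAY FALSE

-- HSQLDB 1.8: SET WRITE_DELAY FALSE
-- Or as a 2.x connection property: jdbc:hsqldb:file:testdb;hsqldb.write_delay=false
```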