Check if table exists using MyBatis and Oracle

I'm using MyBatis to create an Oracle table called User. If the table already exists, the code should just log the message Table User already exists and not create it again. Currently I'm using this method:
public void createTable() {
    try {
        userMapper.createTable();
    } catch (BadSqlGrammarException e) {
        log.error("Table User already exists");
    }
}
It kind of works for now, but I don't think this is a reliable way to do it, because many other problems can also trigger a BadSqlGrammarException.
Apart from catching the exception, I also thought about checking whether the table exists first, but I cannot find a way to do that without calling a stored procedure.
Is there an elegant and correct way to check whether a table exists using MyBatis and Oracle?

I found a way to achieve this.
Add the following to the MyBatis mapper XML file:
<select id="checkTableExists" resultType="int">
  <![CDATA[
    SELECT COUNT(*) FROM user_tables WHERE table_name = 'CHECKSTATUS_LOG'
  ]]>
</select>
Declare it in the mapper interface (along with the createTable method, of course):
public interface CheckStatusLogMapper {
    void createTable();
    int checkTableExists();
}
Then you can use it like this:
public void createTableIfNotExists() {
    // checkTableExists() returns a row count, so compare it against zero
    boolean exists = checkStatusLogMapper.checkTableExists() > 0;
    if (!exists) {
        checkStatusLogMapper.createTable();
    }
}
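If you need to check arbitrary tables rather than one hard-coded name, the query can take the table name as a parameter. A minimal sketch using MyBatis annotations instead of the XML mapper (the annotation style and the upper-case normalization are assumptions, not part of the original setup):

// @Select and @Param come from org.apache.ibatis.annotations.
public interface CheckStatusLogMapper {
    void createTable();

    // Oracle stores unquoted identifiers in upper case, so pass the name in upper case.
    @Select("SELECT COUNT(*) FROM user_tables WHERE table_name = #{tableName}")
    int checkTableExists(@Param("tableName") String tableName);
}

Usage would then be checkStatusLogMapper.checkTableExists("CHECKSTATUS_LOG") > 0.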

Related

A way to update multiple records together?

I am trying to see if there is a way to improve how data is inserted and updated.
I am using an Oracle DB with JDBC.
Currently I update, for example, a customer record with a FOR loop after checking whether toUpdate is true, as in the sample code below, calling an existing DAO update() for each record. But this does not allow multiple records to be UPSERTed together.
Is there a better way to UPSERT multiple records together?
if (toUpdate) {
    for (Customer customerRec : customerRecList) {
        customerRecDAO.update(customerRec);
    }
}
Yes, you can use batching:
public <T> int saveInBatch(List<T> records, String sql, Function<T, MapSqlParameterSource> paramFn) {
    try {
        MapSqlParameterSource[] params = records.stream().map(paramFn).toArray(MapSqlParameterSource[]::new);
        // batchUpdate returns one affected-row count per statement;
        // jdbcTemplate here is a Spring NamedParameterJdbcTemplate.
        int[] rowCounts = jdbcTemplate.batchUpdate(sql, params);
        return Arrays.stream(rowCounts).sum();
    } catch (Exception e) {
        // exception handling
        return 0;
    }
}
paramFn is a function that maps each record to its SQL parameter values. An example could be:
record -> new MapSqlParameterSource("username", record.getUsername()) // just an example; getUsername() is illustrative
MapSqlParameterSource is Spring's container for named SQL parameter values.
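Putting it together for the customer example from the question (a sketch; Customer's getId()/getName() accessors and the parameter names are assumptions):

// Hypothetical call; the accessors and parameter names are illustrative.
int updated = saveInBatch(customerRecList, sql,
        c -> new MapSqlParameterSource()
                .addValue("id", c.getId())
                .addValue("name", c.getName()));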
You can call saveInBatch in such a way that you pass smaller or customized batches of records. Suppose you have a million records; you may want to update only 200-400 records at a time, so you can do something like this:
// Lists.partition is from Google Guava.
private <T> int saveRecords(List<T> records, String sql, Function<T, MapSqlParameterSource> paramFn) {
    return Lists.partition(records, 300).stream()
            .map(batch -> saveInBatch(batch, sql, paramFn))
            .mapToInt(Integer::intValue)
            .sum();
}
Note: the above is not well optimized and the streams are not used to their best, but it is working code I tried ages back :).
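Since the question asks about UPSERT on Oracle, the sql passed to saveInBatch can be an Oracle MERGE statement. A sketch with made-up table and column names, using the named-parameter style that MapSqlParameterSource expects:

// Hypothetical upsert for the batch helper above; table and columns are illustrative.
String sql =
        "MERGE INTO customer c " +
        "USING (SELECT :id AS id, :name AS name FROM dual) src " +
        "ON (c.id = src.id) " +
        "WHEN MATCHED THEN UPDATE SET c.name = src.name " +
        "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (src.id, src.name)";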

How to pass a dynamic SQL statement to the JdbcIO connector in Apache Beam?

Apache Beam provides the JdbcIO connector to connect to Cloud SQL PostgreSQL. My job reads an event from Pub/Sub. The event body is as below:
tableName,
list<value>
I need to write to the table based on the table name that I get in from my message.
JdbcIO has a prepared statement which lets me parameterize the values in my insert query, but I need to generate the insert query itself dynamically based on the information present in the event.
pipeline
    .apply(PubsubIO.readStrings().fromSubscription())
    .apply(convertToKV())
    .apply(JdbcIO.<KV<Integer, String>>write()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "com.mysql.jdbc.Driver", "jdbc:mysql://hostname:3306/mydb")
            .withUsername("username")
            .withPassword("password"))
        .withStatement("insert into Person values(?, ?)")
        .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<KV<Integer, String>>() {
            public void setParameters(KV<Integer, String> element, PreparedStatement query)
                    throws SQLException {
                query.setInt(1, element.getKey());
                query.setString(2, element.getValue());
            }
        })
    );
I should be able to create the SQL statement dynamically based on the input event from the PCollection.
The insert statement should be generated dynamically from the list values and the table name. Please let me know whether this is possible.
Update: I'm trying to call the JDBC driver manually inside the ParDo function, but I'm getting the error below.
No suitable driver found for jdbcURL.
Please let me know if I'm missing anything:
@Setup
public void doAnyRequiredSetup() throws SQLException
{
    LoggingContextUtil.installContext(loggingContext);
    connection = DriverManager.getConnection(JdbcUrl, user, password);
    statement = connection.createStatement();
    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("In doAnyRequiredSetup logging context is now set and JDBC connection is open.");
    }
}
@SuppressWarnings("unchecked")
@ProcessElement
public void processElement(ProcessContext context)
{
    JsonNode element = context.element();
    try {
        String query = formatQuery(baseQuery);
        boolean result = statement.execute(query);
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("Executed query : " + query + " and the result is " + result);
        }
    } catch (IllegalArgumentException | SQLException e) {
        ErrorMessage em = new ErrorMessage(element.toString(), "Insert Query Failed", e.getMessage());
        context.output(ValidateTagHelper.FAILURE_TAG, em);
    }
}
You cannot have dynamic queries in JdbcIO based on the input elements.
As a workaround, you can write your own ParDo in which you call the JDBC driver manually.
I found another workaround: you can split the input PCollection into multiple outputs. That works if your use case is limited to a predefined set of queries that you can choose from based on the input. You split the input into multiple PCollections and then attach a differently configured IO to each, as in the sketch below the link.
https://cloud.google.com/blog/products/gcp/guide-to-common-cloud-dataflow-use-case-patterns-part-1
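A rough sketch of that split with two known tables (the tags, table names, the upstream keyedEvents collection, and dataSourceConfig are all assumptions):

// Route each element to a per-table output, then attach one JdbcIO write per table.
final TupleTag<KV<Integer, String>> personTag = new TupleTag<KV<Integer, String>>() {};
final TupleTag<KV<Integer, String>> addressTag = new TupleTag<KV<Integer, String>>() {};

PCollectionTuple routed = keyedEvents.apply(ParDo.of(
        new DoFn<KV<String, KV<Integer, String>>, KV<Integer, String>>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                // The key is assumed to carry the table name from the event.
                if ("Person".equals(c.element().getKey())) {
                    c.output(c.element().getValue());
                } else {
                    c.output(addressTag, c.element().getValue());
                }
            }
        }).withOutputTags(personTag, TupleTagList.of(addressTag)));

routed.get(personTag).apply(JdbcIO.<KV<Integer, String>>write()
        .withDataSourceConfiguration(dataSourceConfig)
        .withStatement("insert into Person values(?, ?)")
        .withPreparedStatementSetter((element, query) -> {
            query.setInt(1, element.getKey());
            query.setString(2, element.getValue());
        }));
// ...and similarly for routed.get(addressTag) with an Address insert.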
You can try to read the Pub/Sub messages with attributes, and in the attributes you can pass the table name and values as key-value pairs.
PCollection<PubsubMessage> pubsubMessage = pipeline
    .apply(PubsubIO.readMessagesWithAttributes().fromSubscription(""));
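For the manual-JDBC route from the update above, here is a minimal DoFn sketch; the driver class, URL, credentials, and event shape are assumptions. The explicit Class.forName is one common fix for the No suitable driver found error when the driver does not register itself:

// Sketch only: builds the INSERT text per element from the event's table name.
static class DynamicInsertFn extends DoFn<KV<String, List<String>>, Void> {
    private transient Connection connection;

    @Setup
    public void setup() throws Exception {
        // Registering the driver explicitly avoids "No suitable driver found".
        Class.forName("org.postgresql.Driver");
        connection = DriverManager.getConnection(
                "jdbc:postgresql://hostname:5432/mydb", "username", "password");
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws Exception {
        String table = c.element().getKey();          // table name from the event
        List<String> values = c.element().getValue(); // column values from the event
        // In real code, validate the table name against a whitelist first.
        String placeholders = String.join(",", Collections.nCopies(values.size(), "?"));
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO " + table + " VALUES (" + placeholders + ")")) {
            for (int i = 0; i < values.size(); i++) {
                ps.setString(i + 1, values.get(i));
            }
            ps.executeUpdate();
        }
    }

    @Teardown
    public void teardown() throws Exception {
        if (connection != null) {
            connection.close();
        }
    }
}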

Querying single database row using rxjava2

I am using RxJava2 for the first time on an Android project, and am doing SQL queries on a background thread.
However, I am having trouble figuring out the best way to do a simple SQL query and handle the case where the record may or may not exist. Here is the code I am using:
public Observable<Record> createRecordObservable(int id) {
    Callable<Record> callback = new Callable<Record>() {
        @Override
        public Record call() throws Exception {
            // do the actual sql stuff, e.g.
            // select * from Record where id = ?
            return record;
        }
    };
    return Observable.fromCallable(callback).subscribeOn(Schedulers.computation());
}
This works well when there is a record present, but when no record matches the id it is treated as an error. Apparently this is because RxJava2 doesn't allow the Callable to return null.
Obviously I don't really want this. An error should occur only if the database failed or something similar, whereas an empty result is perfectly valid. I read somewhere that one possible solution is wrapping Record in a Java 8 Optional, but my project is not Java 8, and anyway that solution seems a bit ugly.
This is surely such a common, everyday task that I'm sure there must be a simple and easy solution, but I couldn't find one so far. What is the recommended pattern to use here?
Your use case seems appropriate for the new RxJava2 type Maybe, which emits 1 or 0 items.
Maybe.fromCallable will treat a returned null as no items emitted.
You can see this discussion regarding nulls with RxJava2; I guess there aren't many choices besides using something Optional-like in the other cases where you need null/empty values.
Thanks to @yosriz, I have it working with Maybe. Since I can't put code in comments, I'll post a complete answer here:
Instead of Observable, use Maybe like this:
public Maybe<Record> lookupRecord(int id) {
    Callable<Record> callback = new Callable<Record>() {
        @Override
        public Record call() throws Exception {
            // do the actual sql stuff, e.g.
            // select * from Record where id = ?
            return record;
        }
    };
    return Maybe.fromCallable(callback).subscribeOn(Schedulers.computation());
}
The good thing is that the returned record is allowed to be null. To detect which situation occurred, the subscriber code looks like this:
lookupRecord(id)
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(new Consumer<Record>() {
        @Override
        public void accept(Record r) {
            // record was loaded OK
        }
    }, new Consumer<Throwable>() {
        @Override
        public void accept(Throwable throwable) {
            // there was an error
        }
    }, new Action() {
        @Override
        public void run() {
            // there was an empty result
        }
    });

Guid values in Oracle with fluentnhibernate

I've only been using Fluent NHibernate for a few days and it's been going fine until I tried to deal with Guid values in Oracle. I have read a good few posts on the subject, but none that help me solve the problem I am seeing.
I am using Oracle 10g express edition.
I have a simple test table in oracle
CREATE TABLE test (Field RAW(16));
I have a simple class and interface for mapping to the table
public class Test : ITest
{
    public virtual Guid Field { get; set; }
}
public interface ITest
{
    Guid Field { get; set; }
}
Class map is simple
public class TestMap : ClassMap<Test>
{
    public TestMap()
    {
        Id(x => x.Field);
    }
}
I start by trying to insert a simple, easily recognised guid value:
00112233445566778899AABBCCDDEEFF
Here's the code:
var test = new Test { Field = new Guid("00112233445566778899AABBCCDDEEFF") };
// test.Field == 00112233445566778899AABBCCDDEEFF here.
session.Save(test);
// After save the guid is changed: test.Field == 09a3f4eefebc4cdb8c239f5300edfd82.
// This value is different for each run, so I presume NHibernate is assigning
// a value internally.
transaction.Commit();
IQuery query = session.CreateQuery("from Test");
// or
// IQuery query = session.CreateSQLQuery("select * from Test").AddEntity(typeof(Test));
var t = query.List<Test>().Single();
// t.Field == 8ef8a3b10e704e4dae5d9f5300e77098
// This value never changes between runs.
The value actually stored in the database also differs each time; for the run above it was
EEF4A309BCFEDB4C8C239F5300EDFD82
Truly confused....
Any help much appreciated.
EDIT: I always delete data from the table before each test run. Also using ADO directly works no problem.
EDIT: OK, my first problem was that even though I thought I was deleting the data from the table via the Oracle SQL command line, when I viewed the table via the Oracle UI it still had data, and the first guid was, as I should have expected, 8ef8a3b10e704e4dae5d9f5300e77098.
Fluent NHibernate still appears to be altering the guid value on save. It alters it to the value it stores in the database, but I'm still not sure why it is doing this or how/if I can control it.
If you intend to assign the id yourself, you will need to use a different id generator than the default, which is guid.comb. You should use assigned instead, so your mapping would look something like this:
Id(x => x.Field).GeneratedBy.Assigned();
You can read more about id generators in the NHibernate documentation here:
http://www.nhforge.org/doc/nh/en/index.html#mapping-declaration-id-generator

Temporary tables in Linq -- Anyone see a problem with this?

In trying to solve:
Linq .Contains with large set causes TDS error
I think I've stumbled across a solution, and I'd like to see if it's a kosher way of approaching the problem.
(short summary) I'd like to linq-join against a list of record ids that aren't (wholly or at least easily) generated in SQL. It's a big list and frequently blows past the 2100-item limit for the TDS RPC call. So what I'd have done in SQL is throw them in a temp table and then join against it when I needed them.
So I did the same in Linq.
In my MyDB.dbml file I added:
<Table Name="#temptab" Member="TempTabs">
  <Type Name="TempTab">
    <Column Name="recno" Type="System.Int32" DbType="Int NOT NULL"
            IsPrimaryKey="true" CanBeNull="false" />
  </Type>
</Table>
Opening the designer and closing it added the necessary entries there, although for completeness I will quote from the MyDB.designer.cs file:
[Table(Name="#temptab")]
public partial class TempTab : INotifyPropertyChanging, INotifyPropertyChanged
{
    private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty);
    private int _recno;

    #region Extensibility Method Definitions
    partial void OnLoaded();
    partial void OnValidate(System.Data.Linq.ChangeAction action);
    partial void OnCreated();
    partial void OnrecnoChanging(int value);
    partial void OnrecnoChanged();
    #endregion

    public TempTab()
    {
        OnCreated();
    }

    [Column(Storage="_recno", DbType="Int NOT NULL", IsPrimaryKey=true)]
    public int recno
    {
        get
        {
            return this._recno;
        }
        set
        {
            if ((this._recno != value))
            {
                this.OnrecnoChanging(value);
                this.SendPropertyChanging();
                this._recno = value;
                this.SendPropertyChanged("recno");
                this.OnrecnoChanged();
            }
        }
    }

    public event PropertyChangingEventHandler PropertyChanging;
    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void SendPropertyChanging()
    {
        if ((this.PropertyChanging != null))
        {
            this.PropertyChanging(this, emptyChangingEventArgs);
        }
    }

    protected virtual void SendPropertyChanged(String propertyName)
    {
        if ((this.PropertyChanged != null))
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
Then it simply became a matter of juggling around some things in the code. Where I'd normally have had:
MyDBDataContext mydb = new MyDBDataContext();
I had to get it to share its connection with a normal SqlConnection so that I could use the connection to create the temporary table. After that it seems quite usable.
string connstring = "Data Source.... etc..";
SqlConnection conn = new SqlConnection(connstring);
conn.Open();
SqlCommand cmd = new SqlCommand("create table #temptab " +
                                "(recno int primary key not null)", conn);
cmd.ExecuteNonQuery();
MyDBDataContext mydb = new MyDBDataContext(conn);
// Now insert some records (1 shown for example)
TempTab tt = new TempTab();
tt.recno = 1;
mydb.TempTabs.InsertOnSubmit(tt);
mydb.SubmitChanges();
And using it:
// Through normal SqlCommands, etc...
cmd = new SqlCommand("select top 1 * from #temptab", conn);
Object o = cmd.ExecuteScalar();
// Or through Linq
var t = from tx in mydb.TempTabs
from v in mydb.v_BigTables
where tx.recno == v.recno
select tx;
Does anyone see a problem with this approach as a general-purpose solution for using temporary tables in joins in Linq?
It solved my problem wonderfully, as now I can do a straightforward join in Linq instead of having to use .Contains().
Postscript:
The one problem I do have is that mixing Linq and regular SqlCommands on the table (where one is reading/writing and so is the other) can be hazardous. Always using SqlCommands to insert on the table, and then Linq commands to read it, works out fine. Apparently Linq caches results; there's probably a way around it, but it wasn't obvious.
I don't see a problem with using temporary tables to solve your problem. As far as mixing SqlCommands and LINQ, you are absolutely correct about the hazard factor. It's so easy to execute your SQL statements using a DataContext, I wouldn't even worry about the SqlCommand:
private string _ConnectionString = "<your connection string>";

public void CreateTempTable()
{
    using (MyDBDataContext dc = new MyDBDataContext(_ConnectionString))
    {
        dc.ExecuteCommand("create table #temptab (recno int primary key not null)");
    }
}

public void DropTempTable()
{
    using (MyDBDataContext dc = new MyDBDataContext(_ConnectionString))
    {
        dc.ExecuteCommand("DROP TABLE #TEMPTAB");
    }
}

public void YourMethod()
{
    CreateTempTable();
    using (MyDBDataContext dc = new MyDBDataContext(_ConnectionString))
    {
        ...
        ... do whatever you want (within reason)
        ...
    }
    DropTempTable();
}
We have a similar situation, and while this works, the issue is that you aren't really dealing with Queryables, so you cannot easily use this with LINQ. This isn't a solution that works with method chains.
Our final solution was just to throw what we want in a stored procedure, and write selects in that procedure against the temp tables when we want those values. It is a compromise, but both are workarounds. At least with the stored proc the designer will generate the calling code for you, and you have a black boxed implementation so if you need to do further tuning you can do so strictly within the procedure, without a recompile.
In a perfect world, there will be some future support for writing Linq2Sql statements that lets you dictate the use of temp tables within your queries, avoiding the nasty SQL IN statement for complex scenarios like this one.
As a "general-purpose solution", what if you code is run in more than one threads/apps? I think big-list solution is always related to the problem domain. It's better to use a regular table for the problem you are working on.
I once created a "generic" list table in database. The table was created with three columns: int, uniqueidentifier and varchar, along with other columns to manage each list. I was thinking: "it ought to be enough to handle many cases". But soon I received a task that requires a join be performed with a list on three integers. After that, I never tried to create "generic" list table again.
Also, it's better to create a SP to insert multiple items into the list table in each database call. You can easily insert ~2000 items in less than 2 db round trips. Of cause, depending on what you are doing, performance may do not matter.
EDIT: forgot it is a temporary table and temporary table is per connection, so my previous argument on multi-threads was not proper. But still, it is not a general solution, for enforcing the fixed schema.
Would the solution offered by Neil actually work? If it's a temporary table, and each of the methods is creating and disposing its own data context, I don't think the temporary table would still be there after the connection was dropped.
Even if it was there, I think this is an area where you are assuming some functionality of how queries and connections end up being rendered, and that's one of the big issues with LINQ to SQL: you just don't know what might happen down the track as the engineers come up with better ways of doing things.
I'd do it in a stored proc. You can always return the result set into a pre-defined table if you wish.
