We have multiple Oracle schemas that we want to import into some kind of in-memory database, so that our integration tests can run against it and finish faster.
Is there any way this can be achieved using something like HSQLDB? We are using the Spring Framework, and it does support in-memory databases.
Any link to some resource would be highly appreciated.
Try force full database caching mode if you're using Oracle 12.1.0.2. It's not exactly the same as a full in-memory database, but it should get you close.
alter database force full database caching;
In-memory database performance is overrated anyway. Oracle's "old-fashioned" asynchronous I/O and caching often work just fine. For example, in this question, accessing a temporary table (which is stored on disk) runs faster than an equivalent solution using in-memory data structures. And I've seen a small Oracle database handle petabytes of I/O with the "boring" old buffer cache.
Or when you say "run our tests faster", are you referring to a more agile database; one that can be controlled by an individual, instead of the typical monolithic Oracle database installed on a server? I see that issue a lot, and there's no technical reason why Oracle can't be installed on your desktop. But that can be a tough cultural battle.
Yes, you can use HSQLDB for the purpose of unit testing - see this post for more information on how to integrate with Spring.
Also, see this list as a good starting point for different usages of HSQLDB.
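As a quick illustration (the script names below are just placeholders for whatever DDL and data you export from your Oracle schemas), Spring's embedded database support lets you spin up an in-memory HSQLDB like this:
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class TestDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // Builds an in-memory HSQLDB and runs the given classpath scripts on startup.
        // schema.sql / test-data.sql are assumed names; note that Oracle-specific
        // DDL usually needs some adjustment before HSQLDB will accept it.
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("classpath:schema.sql")
                .addScript("classpath:test-data.sql")
                .build();
    }
}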
I'm working on the simple task of adding a new table to an existing SQL DB and wiring it into a Spring Boot API with Spring Data.
I would typically start by defining the DB table directly (creating the PK, FKs, etc.) and then creating the Java bean that represents it, but I'm curious about using the Spring Data initialization feature.
I am wondering when and where Spring Data + JPA's schema generation and DB initialization may be useful. There are many tutorials on how it can be implemented, but the when and why are not as clear to me.
For example:
Should I convert my existing lower-environment DBs (hand-coded) to be initialized automatically? If so, by dropping the existing tables and allowing the app to execute the DDL?
Should this feature be relied on at all in a production environment?
Should generation or initialization be run only once? Some tutorials mention this process running continually, but why would you choose to lose data that often?
What is the purpose of the drop-and-create JPA action? Why would you ever want to drop tables? How are things like UAT test data handled?
My two cents on these topics:
Most people may say that you should not rely on automated database creation because the database is a core concept of your application, and you might want to take over the task so that you know for sure what is really happening. I tend to agree with them. Unless it is a POC or something not production-critical, I would prefer to define the database details myself.
In my opinion, no.
This might be OK in non-production environments, or during early and exploratory development. Definitely not in production.
On a POC or during early and exploratory development this is OK. In any other case I don't see this being useful. Test data might also be part of the initial setup of the database; Spring allows you to do that by defining an SQL script that inserts data into the database on startup.
Bottom line: in my opinion, you should not rely on this feature in production. Instead, you might want to take a look at Liquibase or Flyway (nice article comparing both: https://dzone.com/articles/flyway-vs-liquibase), which are fully fledged database migration tools that you can rely on even in production.
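For a flavour of what that looks like, here is a minimal programmatic Flyway setup (the connection details and script name below are placeholders, not something from your project):
import org.flywaydb.core.Flyway;

public class Migrations {
    public static void main(String[] args) {
        // Configure Flyway against the target database and apply any pending
        // versioned migrations (e.g. V1__create_tables.sql) found on the classpath.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/mydb", "user", "password") // placeholder
                .locations("classpath:db/migration") // Flyway's default script location
                .load();
        flyway.migrate();
    }
}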
My opinion in short:
No, don't rely on auto DDL. It can be a handy feature in development, but it should never be used in production. And be careful: it will change your database whenever you change something on your entities.
But, and this is why I answer, there is a possibility to have Hibernate write the SQL to a file instead of executing it. This gives you the ability to make use of the feature while still controlling how your database is changed. I frequently use this to generate scripts that I then use as blueprints for my own Liquibase migration scripts.
This way you can initially implement an entity in the code and run the application, which generates the Hibernate SQL file containing the CREATE TABLE statement for your newly added entity. Now you don't have to write all those column names and types for the database table yourself.
To achieve this, add the following properties to your application.properties:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=build/generated_scripts/hibernate_schema.sql
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
This will generate the SQL script at build/generated_scripts/hibernate_schema.sql within your project folder.
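For example, given a made-up entity like the one below, the generated script will contain its CREATE TABLE statement:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical entity used only to illustrate the generated output.
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters omitted
}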
I know this is not exactly what you were asking for but I thought this could be a nice hint on how to use Auto DDL in a safer way.
One of the use cases in my application requires publishing Neo4j transaction data to an Oracle database in real time. I googled it but couldn't find a tool or plug-in that can help; everything on the internet talks about RDBMS-to-Neo4j sync, not the other way around. So I am planning to do this by manually invoking JDBC commands.
Can you please suggest something?
Had to write my own JDBC code.
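For anyone hitting the same wall, here is a rough sketch of what that hand-rolled code can look like (the Cypher query, the change-marker property, the Oracle table, and all connection details are made up): read the changed nodes with the Neo4j Java driver and batch-insert them into Oracle over plain JDBC.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class Neo4jToOracleSync {
    public static void main(String[] args) throws Exception {
        try (Driver neo4j = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "password"));             // placeholder credentials
             Session session = neo4j.session();
             Connection oracle = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret")) { // placeholder
            oracle.setAutoCommit(false);
            PreparedStatement insert = oracle.prepareStatement(
                    "INSERT INTO person_sync (id, name) VALUES (?, ?)"); // placeholder table

            // Neo4j offers no built-in change feed for this setup, so poll using a
            // change marker such as an updatedAt property maintained by the application.
            session.run("MATCH (p:Person) WHERE p.updatedAt > $since RETURN id(p) AS id, p.name AS name",
                        Values.parameters("since", 0L))
                   .forEachRemaining(record -> {
                       try {
                           insert.setLong(1, record.get("id").asLong());
                           insert.setString(2, record.get("name").asString());
                           insert.addBatch();
                       } catch (java.sql.SQLException e) {
                           throw new RuntimeException(e);
                       }
                   });
            insert.executeBatch();
            oracle.commit();
        }
    }
}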
I have a Java Spring project that does a lot of database reading. The database I have available is a shared PostgreSQL database on a remote machine, and it's quite slow to get data from it, especially when I'm doing a lot of back-and-forth.
When I'm doing local testing and development, I use the embedded H2 in-memory database. Things easily go 100 times faster.
This made me wonder. Is there a way to use the embedded H2 in-memory database so that:
Data manipulation (INSERT, UPDATE, DELETE) is ("eventually") replicated to the PostgreSQL database
Upon start/restart of the Spring project, the H2 database is automatically filled with the data of the PostgreSQL server
This would allow me to use the fast H2 to provide a seamless user experience while at the same time also storing the data in a longer-term data storage. In a way, I'd be using the H2 as a fully cached version of the PostgreSQL database.
Is there a known / standardized way to approach this? Googling for H2, PostgreSQL, and replication gives me results on migrating from one to the other, but I'm not finding much about using one as a sort of cache for the other.
I remain on the lookout for a Spring / JPA / Hibernate focused answer, but if none comes: I may have found an alternative domain to investigate. Dedicated database replication software might be able to manage this. Specifically, I've discovered SymmetricDS, which seems (I've only given the documentation a cursory glance) like it might be able to be embedded into my Spring application, do an initial load of my embedded in-memory H2 database on startup and then trickle feed data changes to the remote database.
(I'm not in any way affiliated with SymmetricDS, it just looks like it might be a solution.)
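In the meantime, the naive version of the idea (not SymmetricDS; just a one-shot startup copy, and only sensible for small tables) would be something along these lines, with a hypothetical person table:
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch only: copies one (made-up) table from the remote PostgreSQL into the
// embedded H2 at startup. Writes would still have to be forwarded back to
// PostgreSQL separately; that is the part SymmetricDS would handle.
public class H2CacheLoader {

    private final JdbcTemplate postgres; // wraps the remote PostgreSQL DataSource
    private final JdbcTemplate h2;       // wraps the embedded H2 DataSource

    public H2CacheLoader(JdbcTemplate postgres, JdbcTemplate h2) {
        this.postgres = postgres;
        this.h2 = h2;
    }

    public void loadPersonTable() {
        h2.execute("CREATE TABLE IF NOT EXISTS person (id BIGINT PRIMARY KEY, name VARCHAR(255))");
        // Stream rows from the remote database and insert them locally.
        postgres.query("SELECT id, name FROM person", rs -> {
            h2.update("INSERT INTO person (id, name) VALUES (?, ?)",
                      rs.getLong("id"), rs.getString("name"));
        });
    }
}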
My delayed job exports slightly edited versions of most of the tables in the app's database, and while it runs, it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL (Heroku's PostgreSQL, to be more specific).
The flow is something like (all of the below should be done automatically by the code):
the site is put into maintenance mode,
the database is frozen and then exported, then
when the export is complete, the site is re-activated
Given that there is not a lot of information in your question, I am going to answer as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If it is not a standalone engine, does the database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited; specifically, whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed, such as whether you are asking if this can be done from the Ruby DB interface on some particular model of DB.
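That said, since the question does mention Heroku's PostgreSQL: PostgreSQL has no single "lock the whole database" statement, but you can take explicit locks on every table being exported inside one transaction. A rough sketch of the freeze-and-export step (table names and connection details are placeholders; the same SQL works from Ruby's DB interface just as well):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FreezeAndExport {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://host/db", "user", "password"); // placeholder
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);
            // EXCLUSIVE mode still allows concurrent reads but blocks all writes
            // until this transaction commits or rolls back.
            stmt.execute("LOCK TABLE orders, customers IN EXCLUSIVE MODE"); // placeholder tables
            // ... perform the export here while the locks are held ...
            conn.commit(); // releases the locks
        }
    }
}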
I have tried SQLite in Java, but the speed is slow due to the JDBC driver. Then I tried HSQLDB and found the speed good, but I cannot find a good management tool for HSQLDB, such as phpMyAdmin for MySQL or SQLite Manager for SQLite.
I'd like to use the manager tool to prepare test data for unit tests, or to browse the data after doing some small experiments.
Is there any good tool?
Here are a couple of other suggestions you might check out:
Squirrel SQL http://squirrel-sql.sourceforge.net/
Execute Query http://executequery.org/
Razor SQL (paid) http://www.razorsql.com/
Razor has the best feature set, but is paid. The others are good at different things and worth checking into.
This would only have meaning if you are running HSQLDB in Server mode. If you are running in memory or file mode, then you either can't access the DB from another process at all, or doing so would lock it.
In Server mode you can use any universal client; the JDBC driver is the hsqldb.jar itself.
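For completeness, starting HSQLDB in Server mode from code is just a few lines (the database name, path, and port below are only examples):
import org.hsqldb.server.Server;

public class HsqlServerStarter {
    public static void main(String[] args) {
        Server server = new Server();
        server.setDatabaseName(0, "testdb");     // alias clients use to connect
        server.setDatabasePath(0, "mem:testdb"); // purely in-memory backing store
        server.setPort(9001);                    // HSQLDB's default port
        server.start();
        // Any JDBC client (Squirrel SQL, your tests, ...) can now connect via:
        //   jdbc:hsqldb:hsql://localhost:9001/testdb
    }
}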
Actually, HSQLDB comes with its own management tool (which is not super). See http://hsqldb.org/doc/guide/apf.html
I've used Squirrel SQL. It's a universal client for any JDBC database.
See: http://squirrel-sql.sourceforge.net/