In Spring, I am switching over from MySQL to MongoDB.
With MySQL, I can have both an in-memory database (H2) and an actual locally stored MySQL database. Is this not possible with MongoDB? If so, how? Is Spring Data MongoDB in-memory or locally stored?
Yes, it's possible. Check out this embedded one: https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
Example of usage: https://www.baeldung.com/spring-boot-embedded-mongodb
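For example, with Spring Boot 2.x, which auto-configures Flapdoodle's embedded MongoDB when it is on the test classpath, adding the dependency with test scope is usually enough (the version number below is illustrative; check the project page for the current one):

```xml
<!-- pom.xml: embedded MongoDB for tests only (version is illustrative) -->
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>3.5.4</version>
    <scope>test</scope>
</dependency>
```

Your tests then talk to an in-process MongoDB, while the normal `spring.data.mongodb.*` properties still point your production profile at a locally installed (disk-backed) server.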
I used Fongo a few years ago:
https://github.com/fakemongo/fongo
I have written a Spring Batch solution which currently uses the embedded H2 in-memory database.
The read and write operations use SOLR Cloud API calls.
Ideally, we don't want to introduce a proper relational database as the job repository database for this read-write batch operation.
I read that H2 in-memory databases are best used for dev and test in Spring Batch.
Does anyone have experience using this H2 database with Spring Batch in a proper live environment, dealing with millions of records in the batch processing, where the batch job runs only once a day at most?
If H2 is not stable enough for prod, I might have to ditch Spring Batch, or are there any alternatives?
Open to any ideas or references.
Thanks in advance.
H2 is a lightweight Java database that, as you mentioned yourself, is ideal for dev testing!
When considering production, you might be missing out on a lot of features that RDBMS and NoSQL databases provide,
e.g. replication, memory and performance optimizations, etc.
If frequent reads and writes are a concern and you don't want an RDBMS, you may choose MongoDB or Couchbase to manipulate records; they are fast too!
So, considering millions of records, I don't think H2 would be a good choice for a production database.
A similar question might throw some light on this and help you decide:
Are there any reasons why h2 database shouldn't be used in production?
I'm currently writing an integration test with Spring Boot and a PostGIS connection. In my original application a query uses the <-> operator. For my tests I used an H2 in-memory database with the H2GIS extension. Unfortunately, the <-> operator is not recognized and throws a syntax error. Do you have any idea how to do this with an in-memory database, or is the only option to run a Docker container with a proper PostGIS database?
Thank you!
Even if it were possible to run a database similar enough to Postgres, I'd recommend against it.
We now have Testcontainers and can therefore easily start any* database in a Docker container from our tests. This is preferable, because you are testing against the actual database you'll also see in production.
any*: Some of the commercial variants are either huge or take a long time to start up, but Postgres works great.
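A minimal sketch of that approach, assuming JUnit 5 and the Testcontainers PostgreSQL module on the test classpath (class name, image tag, and query are illustrative): the real PostGIS image is declared as a compatible substitute for the Postgres module, so the `<->` operator works exactly as in production.

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
class GeoQueryIT {

    // Start a real PostGIS image, declared compatible with the Postgres module.
    @Container
    static PostgreSQLContainer<?> postgis = new PostgreSQLContainer<>(
            DockerImageName.parse("postgis/postgis:15-3.4")
                    .asCompatibleSubstituteFor("postgres"));

    @Test
    void nearestNeighbourOperatorWorks() throws Exception {
        try (var conn = java.sql.DriverManager.getConnection(
                postgis.getJdbcUrl(), postgis.getUsername(), postgis.getPassword());
             var st = conn.createStatement()) {
            // The <-> distance operator parses because this is real PostGIS,
            // not an emulation layer.
            st.execute("SELECT ST_MakePoint(0,0) <-> ST_MakePoint(1,1)");
        }
    }
}
```

In a Spring Boot test you would additionally point `spring.datasource.url` at `postgis.getJdbcUrl()` (e.g. via `@DynamicPropertySource`), so the whole context runs against the container.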
Is it possible to replace the Apache NiFi H2 database with some other DB (like Postgres or MySQL)?
I looked at the Apache NiFi documentation and configuration but couldn't find anything.
It is not possible. The database is meant to be something you don't really need to know much about; it is just another data store on disk like all the other repositories (flow file, content, provenance). It just so happens to be backed by an embedded DB.
I am trying to write a small web application with a RESTful frontend to manage a small amount of data (around 30 datasets). I want to create a PDF file from the datasets (using iText, but that is not the problem). I am now looking for a small database which I can embed in my application and which persists the data somewhere on my hard disk (if possible, no client/server database), but I can find no example or tutorial for this. All the tutorials I have found use a database in in-memory mode, which is not what I need. Is there a nice tutorial somewhere that could help me? Which database would you suggest for my situation?
Thanks for your help and
Kind regards,
Andreas Grund
You can use H2, HSQL, or Derby as an embedded database. For example, an H2 file-based datasource URL looks like this:
jdbc:h2:~/Desktop/Database/test;DB_CLOSE_ON_EXIT=FALSE;
And in Spring Boot you can set this up easily if you read this document: Spring Boot - Embedded Databases.
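A sketch of the corresponding Spring Boot configuration (the file path and credentials are illustrative): using a `jdbc:h2:file:` URL instead of the default in-memory URL makes H2 write the database to disk, so the data survives restarts.

```properties
# application.properties — file-based (persistent) H2, path is illustrative
spring.datasource.url=jdbc:h2:file:~/data/mydb;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
# Let JPA create/update the schema on startup (fine for a small app like this)
spring.jpa.hibernate.ddl-auto=update
```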
I use the internal persistence manager based on Derby DB, and a filesystem repository.
It is now around 1.5 million files and 3 TB in the repository, and around 6 million records in the Derby DB.
I think that is too much for that DB, because performance has been slowing down dramatically lately.
So I want to change the persistence manager to something like MySQL or Oracle.
What is the best way to export data from an Apache Jackrabbit Derby DB and import it into MySQL?
How can I do this in the easiest and fastest way?
How to migrate to a new version of Jackrabbit or to a new persistence manager is described on the Backup and Migration page.
In my experience, MySQL or Oracle are not actually faster, since Derby is embedded (in-process) while MySQL and Oracle are remote, so each request incurs a network round trip.
Instead, what you could do is use a higher bundle cache size and/or a higher database cache size.
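For reference, the bundle cache size mentioned above is a parameter on Jackrabbit's bundle persistence managers, set in the workspace's repository.xml. A sketch of such a fragment, keeping Derby but enlarging the cache (the values are illustrative; bundleCacheSize is in MB):

```xml
<!-- repository.xml fragment: Derby bundle persistence manager with a
     larger bundle cache (sizes are illustrative) -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.DerbyPersistenceManager">
  <param name="url" value="jdbc:derby:${wsp.home}/db;create=true"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
  <param name="bundleCacheSize" value="64"/>
</PersistenceManager>
```

Tuning this cache lets Derby serve hot node bundles from memory instead of hitting the on-disk tables, which often helps more than swapping the backing database.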