I'm trying to use Flyway to handle multiple schemas with identical structures.
I want to centralize the metadata (the contents of the schema_version table) in a single schema,
so that it is easy to see and check; otherwise it ends up scattered across the different schemas.
As the documentation suggests, I tried the setSchemas() method and listed the schemas I wanted to manage.
The problem is that it only migrates the first schema in the list and disregards the rest.
The code I used is below.
Flyway flyway = new Flyway();
flyway.setDataSource(url, user, password); // database info
flyway.setSchemas("test1", "test2");
for (int i = 1; i <= 2; i++) {
    flyway.migrate();
}
As an alternative, I set the name of each schema inside the loop, as below.
Flyway flyway = new Flyway();
flyway.setDataSource(url, user, password); // database info
for (int i = 1; i <= 2; i++) {
    flyway.setSchemas("test" + i);
    flyway.migrate();
}
This works, but the metadata table is created in each schema.
Is there a way to achieve my goal using the Flyway API?
I've looked into other questions and saw that I have to prefix the object names accordingly.
Can anyone give me sample code implementing the solution?
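Not an authoritative answer, but a minimal sketch of one common workaround, assuming the pre-5.x Flyway API used above: list only a dedicated metadata schema in setSchemas() (the metadata table is created in the first schema of that list), give each target schema its own metadata table via setTable(), and inject the target schema name into the SQL scripts through a placeholder. The names flyway_meta, schema_version_testN, and ${schema} are illustrative assumptions:
// java.util.Collections
Flyway flyway = new Flyway();
flyway.setDataSource(url, user, password); // database info

// All metadata tables end up in this one schema (the first entry of setSchemas)
flyway.setSchemas("flyway_meta");

for (int i = 1; i <= 2; i++) {
    String target = "test" + i;
    // One metadata table per target schema, all kept inside flyway_meta
    flyway.setTable("schema_version_" + target);
    // The migration scripts must qualify objects as ${schema}.tablename
    flyway.setPlaceholders(Collections.singletonMap("schema", target));
    flyway.migrate();
}
With this layout, all migration histories can be inspected with queries against the single flyway_meta schema.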
I have a Spring Boot application where I use QueryDSL for dynamic queries.
Now the results should be exported as a CSV file.
The model is an Order which contains products; the products should be included in the CSV file.
As there are many thousands of orders with millions of products, this should not be loaded into memory all at once.
However, the solutions Hibernate offers (ScrollableResults) and plain streams are not supported by QueryDSL.
How can this be achieved while still using QueryDSL (to avoid duplicating the filtering logic)?
One workaround to this problem is to keep iterating using offset and limit.
Something like:
long limit = 100;
long offset = 0;
List<MyEntity> chunk;
do {
    chunk = new JPAQuery<Void>(em)
            .select(QMyEntity.myEntity)
            .from(QMyEntity.myEntity)
            .orderBy(QMyEntity.myEntity.id.asc()) // a stable order is required for paging; assumes a numeric id
            .limit(limit)
            .offset(offset)
            .fetch();
    // process the chunk here, e.g. append its rows to the CSV
    offset += limit;
} while (chunk.size() == limit);
With that approach you can fetch smaller chunks of data. It is important to verify that limit and offset actually work well with your query: there are situations where, even with limit and offset, the database still performs a full scan of the tables involved. If that happens you trade the memory problem for a performance one.
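If that is your case, a keyset (seek) variant is a common alternative: instead of an offset, it filters on the last key seen, so the database never re-scans the skipped rows. A sketch under the assumption that MyEntity has a monotonically increasing numeric id:
// Keyset pagination: resume after the last seen id instead of skipping rows
long limit = 100;
long lastId = 0L;
List<MyEntity> chunk;
do {
    chunk = new JPAQuery<Void>(em)
            .select(QMyEntity.myEntity)
            .from(QMyEntity.myEntity)
            .where(QMyEntity.myEntity.id.gt(lastId))
            .orderBy(QMyEntity.myEntity.id.asc())
            .limit(limit)
            .fetch();
    for (MyEntity entity : chunk) {
        lastId = entity.getId(); // remember where to resume
        // write the CSV row for this entity
    }
} while (!chunk.isEmpty());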
Use JPAQueryFactory
// com.querydsl.jpa.impl.JPAQueryFactory
JPAQueryFactory jpaQueryFactory = new JPAQueryFactory(entityManager);

Stream<MyEntity> stream = jpaQueryFactory
        .selectFrom(QMyEntity.myEntity)
        .where(cond)            // your existing QueryDSL predicate
        .createQuery()          // unwrap the underlying JPA query
        .getResultStream();     // JPA 2.2+: consumes the results lazily
// do something with the stream, e.g. write the CSV rows
stream.close();                 // always close the stream to release the cursor
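One caveat, assuming Hibernate is the JPA provider: getResultStream() only avoids loading everything if the JDBC driver streams too, so it can help to set Hibernate's fetch-size hint on the unwrapped query; try-with-resources also guarantees the stream gets closed. A hedged sketch (the hint name is Hibernate-specific, the batch size arbitrary):
javax.persistence.Query jpaQuery = jpaQueryFactory
        .selectFrom(QMyEntity.myEntity)
        .where(cond)
        .createQuery();
jpaQuery.setHint("org.hibernate.fetchSize", 100); // fetch in batches of 100 rows
try (Stream<MyEntity> stream = jpaQuery.getResultStream()) {
    stream.forEach(entity -> {
        // write one CSV row per entity
    });
} // the stream (and the underlying cursor) is closed even on failure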
In my integration test I am creating an H2 database with two schemas, A and B. A is set as the default schema, as it is in the normal setup when the application runs against a PostgreSQL database. During the integration test I start both the H2 database and an embedded Tomcat server, and execute the SQL files to initialise the database via Liquibase.
All models that are connected to tables in schema A are annotated with @Table("tablename"), whereas models for schema B are annotated with @Table("B.tablename"). When I call a REST endpoint on the embedded server, activeJDBC warns me:
WARN org.javalite.activejdbc.Registry - Failed to retrieve metadata for table: 'B.tablename'. Are you sure this table exists? For some databases table names are case sensitive.
When I then try to access a table in schema B in my Java code, activeJDBC throws the following exception (which is expected after the previous warning):
org.javalite.activejdbc.InitException: Failed to find table: B.tablename
at org.javalite.activejdbc.MetaModel.getAttributeNames(MetaModel.java:248)
at org.javalite.activejdbc.Model.hydrate(Model.java:207)
at org.javalite.activejdbc.ModelDelegate.instance(ModelDelegate.java:247)
at org.javalite.activejdbc.ModelDelegate.instance(ModelDelegate.java:241)
...
Accessing tables in schema A works as expected.
I am sure that the tables in schema B are actually created and contain data, because in addition to the Liquibase log entries for executing the files, I can also query the database directly and get the table content as a result:
Initialisation of database and server:
private static final String H2_CONNECTION_STRING =
        "jdbc:h2:mem:testdb;INIT=CREATE SCHEMA IF NOT EXISTS A\\;SET SCHEMA A\\;CREATE SCHEMA IF NOT EXISTS B\\;";

@Before
public void initializeDatabase() throws SQLException {
    connection = DriverManager.getConnection(H2_CONNECTION_STRING);
    Statement stat = connection.createStatement();
    stat.execute("GRANT ALTER ANY SCHEMA TO PUBLIC");
    LiquibaseInitialisation.initH2(H2_CONNECTION_STRING); // execute SQL scripts
    EmbeddedServer.startServer();
}
Query to print content of B.tablename:
Logger log = LoggerFactory.getLogger("test");
Statement stat = connection.createStatement();
stat.execute("SELECT * FROM B.tablename;");
ResultSet resultSet = stat.getResultSet();
ResultSetMetaData rsmd = resultSet.getMetaData();
int columnsNumber = rsmd.getColumnCount();
while (resultSet.next()) {
    StringBuilder builder = new StringBuilder();
    for (int i = 1; i <= columnsNumber; i++) {
        builder.append(resultSet.getString(i)).append(" ");
    }
    log.info(builder.toString());
}
This produces the desired output of the content of B.tablename.
The question is this: why doesn't activeJDBC find the tables in schema B in the H2 database when they are clearly present, while the same setup works flawlessly with PostgreSQL? Am I missing something with regard to schemas in H2 or activeJDBC?
Please log this as an issue at https://github.com/javalite/activejdbc/issues and provide full instructions to reproduce this condition. Best if you can provide a small project.
In a Spring Boot environment with Spring Data Neo4j 4, I want to perform a delete and get an error message when it fails to delete anything.
My problem is that since Repository.delete() returns void, I have no idea whether the delete modified anything or not.
First question: is there any way to get the number of rows affected by the last query? For example, in PL/SQL I could use SQL%ROWCOUNT.
So anyway, I tried the following code:
public void deleteSomething(Long somethingId) {
    somethingRepository.delete(getExistingSomething(somethingId, 0).getId());
}

private Something getExistingSomething(Long somethingId, int depth) {
    return Optional.ofNullable(somethingRepository.findOne(somethingId, depth))
            .orElseThrow(() -> new SomethingNotFoundException(somethingId));
}
In the code above I query the database to check that the value exists before I delete it.
Second question: do you recommend a different approach?
Now, just to add some complexity: I have a clustered database where db1 can only Create, Update and Delete, while db2 and db3 can only Read (this is enforced by the cluster sockets). db2 and db3 receive the data from db1 through the replication process.
From what I have seen so far, replication can take up to 90s, which means that for up to 90s the databases can be in different states.
Looking again at the code above:
public void deleteSomething(Long somethingId) {
    somethingRepository.delete(getExistingSomething(somethingId, 0).getId());
}
in debugging terms that means:
getExistingSomething(somethingId, 0).getId() // will hit db2
somethingRepository.delete(...) // will hit db1
So if replication has not yet copied the value to db2, this code will throw the exception.
The third question is: without changing those sockets, is there any way for me to delete and return the correct response?
This is not currently supported in Spring Data Neo4j; if you wish, please open a feature request.
In the meantime, perhaps the easiest workaround is to drop down to the OGM level of abstraction.
Create a class that is injected with org.neo4j.ogm.session.Session.
Use the query(cypher, parameters) method on Session; the Result it returns exposes QueryStatistics.
Example (in Kotlin, which was what was on hand):
fun deleteProfilesByColor(color: String) {
    val query = """
        MATCH (n:Profile {color: {color}})
        DETACH DELETE n;
        """
    val params = mutableMapOf(
        "color" to color
    )
    val result = session.query(query, params)
    val statistics = result.queryStatistics() // counters such as nodesDeleted tell you whether anything changed
}
I have a large Java application that is configured to use JPA and Hibernate. It is also supposedly configured to use Ehcache for both entity and query caching. However, I have SQL logging turned on and no entities are being cached: all of the entity queries happen on every request.
How can I determine at runtime whether Ehcache is even running, and whether it considers an entity cacheable?
I didn't write this app, so I'm a bit stuck here.
It uses declarations on the classes for caching.
It is correctly using all the other Hibernate declarations for the read/write operations.
Try something like this:
// net.sf.ehcache.CacheManager (Ehcache 2.x)
List<CacheManager> tempManagers = CacheManager.ALL_CACHE_MANAGERS;
System.out.println("# of CMs : " + tempManagers.size());
for (CacheManager tempCM : tempManagers) {
    System.out.println("Got: " + tempCM.getName());
    for (String cacheName : tempCM.getCacheNames()) {
        System.out.println(cacheName + " - " + tempCM.getEhcache(cacheName).getStatistics());
    }
}
The short answer: a debugger.
Put a breakpoint where you load an entity and follow it down the stack. See whether it ever even attempts to get the object from Ehcache; also check whether it tries to put the object into the cache after fetching it from the DB.
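A complementary check that needs no debugger: Hibernate's own statistics show whether the second-level cache is being hit at all. A minimal sketch, assuming you can reach the SessionFactory and that hibernate.generate_statistics is enabled (otherwise the counters stay at zero):
// org.hibernate.stat.Statistics, java.util.Arrays
Statistics stats = sessionFactory.getStatistics();
System.out.println("2nd level cache hits  : " + stats.getSecondLevelCacheHitCount());
System.out.println("2nd level cache misses: " + stats.getSecondLevelCacheMissCount());
System.out.println("2nd level cache puts  : " + stats.getSecondLevelCachePutCount());
// If no entity is cacheable, no cache regions are registered at all
System.out.println("Regions: " + Arrays.toString(stats.getSecondLevelCacheRegionNames()));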
I implemented it this way:
public boolean areCachesDefined() {
return this.cacheManagers.stream()
.anyMatch(cacheManager -> cacheManager.getCacheNames().iterator().hasNext());
}
where cacheManagers is a collection of the cache managers, one per cache type (for example, Ehcache).
The solution by @mglauche is pretty good. Additionally, during startup you can check whether your logs print the following:
o.s.c.ehcache.EhCacheManagerFactoryBean : Initializing EhCache CacheManager
In our application, we have various objects set to lazy="false" based on the application's needs. However, in one of the use cases we want to ignore all the lazy settings in the HBM files and fetch ONLY the target object.
So the question is: is there a way to specify in the HQL to fetch ONLY the target object, irrespective of the HBM settings?
You can use setFetchMode on the Criteria before it is executed to override the HBM file setting.
Sorry, I'm not sure whether I understood your question correctly.
If you have to implement it for a specific class, you can just use SetFetchMode.
var query = session.CreateCriteria(typeof(MyClass));
query.SetFetchMode("PropertyA", FetchMode.Select);
query.SetFetchMode("PropertyB", FetchMode.Select);
Note: for many-to-one references the entity class itself must be mapped with lazy=true. If not, NHibernate doesn't even create a proxy class for it.
This is the answer if you want to lazy-load the type in a generic, type-independent way:
you could find the entity-typed properties via the metadata and add fetch modes to a criteria.
I didn't try it, but I would start with the following code:
var meta = sessionFactory.GetClassMetadata(typeof(MyClass));
var query = session.CreateCriteria(typeof(MyClass));
for (int index = 0; index < meta.PropertyTypes.Length; index++)
{
    if (meta.PropertyTypes[index].IsEntityType)
    {
        query.SetFetchMode(meta.PropertyNames[index], FetchMode.Select);
    }
}
This doesn't include collections. They are probably found with factory.GetCollectionMetadata(roleName), but you need to find out the roleName.