Spring Data Cassandra and Map of Maps

I have a Cassandra table defined like so:
create table foo (id int primary key, mapofmaps map<text, frozen<map<text, int>>>);
Into which I place some data:
insert into foo (id, mapofmaps) values (1, {'pets': {'dog': 42, 'cat': 7}, 'foods': {'taco': 555, 'cake': 721}});
I am then trying to use spring-data-cassandra to interact with it. I have a POJO:
@Table
public class Foo {

    @PrimaryKey
    private Integer id;

    @Column("mapofmaps")
    private Map<String, Map<String, Integer>> mapOfMaps;

    // getters/setters omitted for brevity
}
And a Repository:
public interface FooRepository extends CassandraRepository<Foo> {
}
And then the following code to try and retrieve all the records as a simple test:
public Iterable<Foo> getAllFoos() {
    return fooRepository.findAll();
}
Unfortunately this throws an exception. Less exotic column types work fine, e.g. List<String> and non-nested Map columns, but this map of maps does not.
I'm wondering whether there is no support for this in spring-data-cassandra (though the exception appears to be in the DataStax code) or whether I just need to do something different with the POJO.
The exception thrown is as follows:
Caused by: java.lang.NullPointerException: null
at com.datastax.driver.core.TypeCodec$MapCodec.deserialize(TypeCodec.java:821)
at com.datastax.driver.core.TypeCodec$MapCodec.deserialize(TypeCodec.java:775)
at com.datastax.driver.core.ArrayBackedRow.getMap(ArrayBackedRow.java:299)
at org.springframework.data.cassandra.convert.ColumnReader.get(ColumnReader.java:53)

I don't know about the Spring Cassandra framework, but you can access the data using the DataStax driver directly.
https://github.com/datastax/java-driver
I did some digging, and the Spring framework uses the Java driver under the hood, so there must be a Cluster object already instantiated that you can leverage if the map functionality you need is not exposed by spring-cassandra. The functionality could probably be added.
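For example, here is a minimal sketch of querying the table with the driver directly (written against the 2.x driver API; the contact point and keyspace name are placeholders, not anything from the question):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import java.util.Map;

public class DirectDriverExample {
    public static void main(String[] args) {
        // "127.0.0.1" and "mykeyspace" are placeholders for your environment.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect("mykeyspace");
            for (Row row : session.execute("select id, mapofmaps from foo")) {
                // The nested map comes back as Map<String, Map>; each inner
                // value is a Map<String, Integer> at runtime.
                Map<String, Map> mapOfMaps = row.getMap("mapofmaps", String.class, Map.class);
                System.out.println(row.getInt("id") + " -> " + mapOfMaps);
            }
        } finally {
            cluster.close();
        }
    }
}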

OK, what @phact said is not the answer I was looking for, but it did set me on the path to figuring things out.
As per my own comment on my original post, it appeared that this was a DataStax driver issue, not a spring-data-cassandra issue. That was borne out when I wrote a small test harness to query the problem table with just the DataStax client: I picked v2.1.7.1 of cassandra-driver-core and was able to query the table with the map of maps just fine.
Looking at the driver version that v1.3.0 of spring-data-cassandra brings in, it's older: v2.0.4. A bit of Maven dependency malarkey later, I had my spring-data-cassandra project using a newer DataStax driver, and everything works fine.
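The "malarkey" boils down to declaring the newer driver explicitly in the pom so it overrides the transitive v2.0.4 (the coordinates below are the standard ones for the DataStax driver; the version is the one I tested with):

<!-- Override the transitive cassandra-driver-core 2.0.4 pulled in by
     spring-data-cassandra 1.3.0 with a newer driver. -->
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.7.1</version>
</dependency>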

Related

SpringBoot build query dynamically

I'm using Spring Boot 2.3.1 and Spring Data to access PostgreSQL. I have the following simple controller:
@RestController
public class OrgsApiImpl implements OrgsApi {

    @Autowired
    Orgs repository;

    @Override
    public ResponseEntity<List<OrgEntity>> listOrgs(@Valid Optional<Integer> pageLimit,
            @Valid Optional<String> pageCursor, @Valid Optional<List<String>> domainId,
            @Valid Optional<List<String>> userId) {
        List<OrgEntity> orgs;
        if (domainId.isPresent() && userId.isPresent()) {
            orgs = repository.findAllByDomainIdInAndUserIdIn(domainId.get(), userId.get());
        } else if (domainId.isPresent()) {
            orgs = repository.findAllByDomainIdIn(domainId.get());
        } else if (userId.isPresent()) {
            orgs = repository.findAllByUserIdIn(userId.get());
        } else {
            orgs = repository.findAll();
        }
        return ResponseEntity.ok(orgs);
    }
}
And a simple JPA repository:
public interface Orgs extends JpaRepository<OrgEntity, String> {
    List<OrgEntity> findAllByDomainIdIn(List<String> domainIds);
    List<OrgEntity> findAllByUserIdIn(List<String> userIds);
    List<OrgEntity> findAllByDomainIdInAndUserIdIn(List<String> domainIds, List<String> userIds);
}
The code above has several obvious issues:
1. If the number of query parameters grows, this if/else chain grows very fast and becomes too hard to maintain. Question: is there any way to build a query with a dynamic number of parameters?
2. The code contains no mechanism to support a cursor. Question: is there any tool in Spring Data to support cursor-based queries?
The second question is easy to deal with once the first one is answered.
Thank you in advance!
tl;dr
It's all in the reference documentation.
Details
Spring Data modules pretty broadly support Querydsl to build dynamic queries as documented in the reference documentation. For Spring Data JPA in particular, there's also support for Specifications on top of the JPA Criteria API. For simple permutations, query by example might be an option, too.
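As an illustration, here is a minimal sketch of the Specifications approach applied to the repository from the question (OrgEntity and its domainId/userId fields come from the question; treat this as a sketch rather than a drop-in implementation):

import java.util.List;
import java.util.Optional;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

interface Orgs extends JpaRepository<OrgEntity, String>, JpaSpecificationExecutor<OrgEntity> {
}

class OrgQueryService {

    private final Orgs repository;

    OrgQueryService(Orgs repository) {
        this.repository = repository;
    }

    List<OrgEntity> listOrgs(Optional<List<String>> domainId, Optional<List<String>> userId) {
        // Start from an empty (always-true) specification and add only
        // the predicates whose parameters were actually supplied, so the
        // if/else chain no longer grows with the number of parameters.
        Specification<OrgEntity> spec = Specification.where(null);
        if (domainId.isPresent()) {
            spec = spec.and((root, query, cb) -> root.get("domainId").in(domainId.get()));
        }
        if (userId.isPresent()) {
            spec = spec.and((root, query, cb) -> root.get("userId").in(userId.get()));
        }
        return repository.findAll(spec);
    }
}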
As for the second question, Spring Data repositories support streaming over results. That said, assuming you'd like to do this for performance reasons, JPA might not be the best fit in the first place, as it still keeps processed items around due to its entity lifecycle model. If it's just about accessing subsets of the results page by page or slice by slice, that's supported, too.
For even more efficient streaming over large data sets, it's advisable to resort to plain SQL either via jOOQ (which can be used with any Spring Data module supporting relational databases), Spring Data JDBC or even Spring Data R2DBC if reactive programming is an option.
You can use the spring-dynamic-jpa library to write a query template.
The query template is built into different query strings before execution, depending on the parameters you pass when you invoke the method.
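For illustration only, a template-based repository might look roughly like the sketch below. The @DynamicQuery annotation and the FreeMarker-style <#if> directives reflect that library's documented usage as I recall it, not a verified release, so treat every name here as an assumption:

public interface Orgs extends JpaRepository<OrgEntity, String> {

    // Hypothetical template: each <#if> block is rendered into the final
    // query string only when the corresponding parameter has content.
    // The @DynamicQuery import comes from the spring-dynamic-jpa library.
    @DynamicQuery("select o from OrgEntity o where 1 = 1"
            + " <#if domainIds?has_content> and o.domainId in :domainIds </#if>"
            + " <#if userIds?has_content> and o.userId in :userIds </#if>")
    List<OrgEntity> searchOrgs(List<String> domainIds, List<String> userIds);
}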

spring boot application: jpa query returning old data

We have created a Spring Boot project using version 1.3.5. Our application interacts with a MySQL database.
We have created a set of JPA repositories in which we are using findAll, findOne and other custom query methods.
We are facing an issue which occurs randomly. Following are the steps to reproduce it:
1. Fire a read query on the DB using the Spring Boot application.
2. Manually change the data in the MySQL console for the records returned by the read query above.
3. Fire the same read query again using the application.
After step 3 we should have received the modified results of step 2, but what we got was the data before modification. If we fire the read query once more, it gives us the correct values.
This issue occurs randomly. We are not using any kind of cache in our application.
While debugging, I found that the JPA repository code is in fact calling MySQL and fetching the latest result, but when the call returns to our application service, surprisingly the return value contains the old data.
Please help us identify the possible cause.
JPA/Datasource config:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/dbname?autoReconnect=true
spring.datasource.username=root
spring.datasource.password=xxx
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
spring.datasource.max-wait=15000
spring.datasource.max-active=100
spring.datasource.max-idle=20
spring.datasource.test-on-borrow=true
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=300
spring.datasource.default-auto-commit=false
spring.datasource.validation-query=SELECT 1
spring.datasource.validation-interval=30000
hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql=false
hibernate.hbm2ddl.auto=update
Service Method:
@Override
@Transactional
public List<Event> getAllEvent() {
    return eventRepository.findAll();
}
JPARepository:
public interface EventRepository extends JpaRepository<Event, Long> {
    List<Event> findAll();
}
Try annotating the entity with @Cacheable(false), for example:
@Entity
@Table(name = "table_name")
@Cacheable(false)
public class EntityName {
    // ...
}
This might be because of "dirty reads". I faced a similar issue; try using transactional locks, especially "repeatable read", which could probably avoid this problem. Correct me if I'm wrong.
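In Spring that suggestion would look something like the sketch below, applied to the service method from the question (whether REPEATABLE_READ actually resolves the stale read here is the answerer's hypothesis, not a verified fix):

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Sketch: pin the isolation level on the service method instead of
// relying on the database default.
@Transactional(isolation = Isolation.REPEATABLE_READ)
public List<Event> getAllEvent() {
    return eventRepository.findAll();
}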
You can use entityManager.refresh(entity) to get the latest values of the entity.
You can use:
@Autowired
private EntityManager entityManager;
Then, before querying the same entity another time, call:
entityManager.clear();
and then run the query.
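Putting that together with the service from the question, a minimal sketch might look like this (EventService is a hypothetical wrapper class; EventRepository and Event are the question's own types):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EventService {

    @Autowired
    private EventRepository eventRepository;

    // @PersistenceContext is the usual way to inject the EntityManager;
    // plain @Autowired also works in Spring Boot.
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public List<Event> getAllEventFresh() {
        // Detach everything in the persistence context so findAll()
        // rehydrates entities from the database rather than returning
        // already-managed (possibly stale) instances.
        entityManager.clear();
        return eventRepository.findAll();
    }
}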

Hibernate does not create table?

I am working on a Spring + Hibernate based project. A project was handed to me with a simple Spring Web Maven structure (Spring Tool Suite as IDE).
I have successfully imported the project into my STS IDE and have also changed some of the Hibernate configuration properties so that the application can talk to my local PostgreSQL server.
The changes that I have made are as given below:
jdbc.driverClassName=org.postgresql.Driver
jdbc.dialect=org.hibernate.dialect.PostgreSQLDialect
jdbc.databaseurl=jdbc:postgresql://localhost:5432/schema
jdbc.username=username
jdbc.password=password
The hibernate.hbm2ddl.auto property is already set to update, so I didn't change that.
Then I simply deploy my project to the Pivotal server; Hibernate runs and creates around 36 tables inside my DB schema. Looks fine!
My problem: in my hibernate.cfg.xml file a total of 100 Java model classes are mapped, and they all have the @Entity annotation. So why is Hibernate not creating the remaining tables?
For certain reasons I can't post the code of any of the model classes here. I have searched a lot about this problem and applied many different solutions, but nothing worked. Could someone please let me know what could make Hibernate behave like this?
One of my model classes which is not created in my DB:
@Entity
@Table(name = "fare_master")
public class FareMaster {

    @Id
    @Column(name = "fare_id")
    @GeneratedValue
    private int fareId;

    @Column(name = "base_fare_amount")
    private double baseFareAmount;

    public int getFareId() {
        return fareId;
    }

    public void setFareId(int fareId) {
        this.fareId = fareId;
    }

    public double getBaseFareAmount() {
        return baseFareAmount;
    }

    public void setBaseFareAmount(double baseFareAmount) {
        this.baseFareAmount = baseFareAmount;
    }
}
And the mapping of the class is as follows:
<mapping class="com.mypackage.model.FareMaster" />
Change the hibernate.hbm2ddl.auto property to create-drop if you want to create tables; setting it to update will just allow you to update existing tables in your DB.
And check your log file to catch errors.
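For example (note that create-drop also drops the schema when the session factory closes, so use it only on development databases):

# Recreate the schema on startup and drop it on shutdown (dev only).
hibernate.hbm2ddl.auto=create-drop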
After a lot of effort, I came to the conclusion that for this problem, and similar ones, we should always follow some basic rules:
1.) First, be sure about your problem, i.e. the exact issue causing this type of error.
2.) To find the exact issue, use a logger in your application; you will definitely save a lot of time.
In my case this happened because I had switched my DB from MySQL to PostgreSQL, and some of the syntax in columnDefinition (a parameter of the @Column annotation) was not compatible with the new DB. When I switched back to MySQL, everything worked fine.
If you have a schema.sql file in your project, Hibernate does not create tables.
Remove it and try again.

neo4j spring use existing data

I have started to use SDN 3.0.0.M1 with Neo4j 2.0 (via the REST interface) and I want to use an existing graph.db with existing data.
I have no problem finding nodes created through SDN via hrRepository.save(myObject); but I can't fetch any existing node (not created through SDN) via hrRepository.findAll() or any other method, despite having manually added a __type__ property to these existing nodes.
I am using a very simple repository to test this:
@Component
public interface HrRepository extends GraphRepository<Hr> {

    Hr findByName(String name);

    @Query("match (hr:hr) return hr")
    EndResult<Hr> GetAllHrByLabels();
}
And the named query GetAllHrByLabels works perfectly.
Is there an existing way to use the standard methods (findAll(), findByName()) on existing data without redefining the Cypher query?
I recently ran into the same problem when upgrading from SDN 2.x to 3.0. I was able to get it working by first following the steps in this article: http://maxdemarzi.com/2013/06/26/neo4j-2-0-is-coming/ to create and enable Neo4j Labels on the existing data.
From there, though, I had to get things working for SDN 3. As you encountered, to do this, you need to set the metadata correctly. Here's how to do that:
Consider a @NodeEntity called Person that inherits from AbstractNodeEntity (imports and extraneous code removed for brevity):
AbstractNodeEntity:
@NodeEntity
public abstract class AbstractNodeEntity {
    @GraphId private Long id;
}
Person:
@NodeEntity
@TypeAlias("Person") // <== This line added for SDN 3.0
public class Person extends AbstractNodeEntity {
    public String name;
}
As you know, in SDN 2.x, a __type__ property is created automatically that stores the class name used by SDN to instantiate the node entity when it's read from Neo4j. This is still true, although in SDN 3.0 it's now specified using the @TypeAlias annotation, as seen in the example above. SDN 3.0 also adds new metadata in the form of Neo4j Labels representing the class hierarchy, where the node's class is prepended with an underscore (_).
For existing data, you can add these labels in Cypher (I just used the new web-based browser utility in Neo4j 2.0.1) like this:
MATCH (n {__type__:'Person'}) SET n:`_Person`:`AbstractNodeEntity`;
Just wash/rinse/repeat for the other @NodeEntity types you have.
There is also a Neo4j Label that gets created called SDN_LABEL_STRATEGY but it isn't applied to any nodes, at least in my data. SDN 3 must have created it automatically, as I didn't do so manually.
Hope this helps...
-Chris
Using SDN over REST is probably not the best idea performance-wise. Just so you know.
Data not created with SDN won't have the necessary meta information.
You will have to iterate over the nodes manually and use
template.postEntityCreation(node, entityClass);
on each of them to add the type information, where entityClass is your SDN-annotated entity class. Something like:
for (Node n : template.query("match (n) where n.__type__ = 'Hr' return n").to(Node.class)) {
    template.postEntityCreation(n, Hr.class);
}

Why do I get errors in Datanucleus when spatial extensions are added

I am attempting to use DataNucleus with the datanucleus-spatial plugin, using annotations for my mappings, and I am trying this with both PostGIS and Oracle Spatial, going back to the tutorials from DataNucleus. What I'm experiencing doesn't make any sense. My development environment is NetBeans 7.x (I've tried 7.0, 7.2 and 7.3) with Maven 2.2.1. Using the Position class in DataNucleus's tutorial found at http://www.datanucleus.org/products/datanucleus/jdo/guides/spatial_tutorial.html, I find that if I do not include the datanucleus-spatial plugin in my Maven dependencies, it connects to PostGIS or Oracle with no problem and commits the data, the spatial data being stored as a blob (I expected this, since no spatial plugin is present). Using PostGIS, the tutorial works just fine.
I modified the Position class by replacing the org.postgis.Point class with oracle.spatial.geometry.JGeometry and pointed my connection at an Oracle server. Without spatial, the point is again stored as a blob. With spatial, I get the following exception:
java.lang.ClassCastException: org.datanucleus.store.rdbms.datasource.dbcp.PoolingDataSource$PoolGuardConnectionWrapper cannot be cast to oracle.jdbc.OracleConnection
The modified class looks like the following:
@PersistenceCapable
public class Position
{
    @PrimaryKey
    private String name;

    @Persistent
    private JGeometry point;

    public Position(String name, double x, double y)
    {
        this(name, JGeometry.createPoint(new double[]{x, y}, 2, 4326));
    }

    public Position(String name, JGeometry point)
    {
        this.name = name;
        this.point = point;
    }

    public String getName()
    {
        return name;
    }

    public JGeometry getPoint()
    {
        return point;
    }

    @Override
    public String toString()
    {
        return "[name] " + name + " [point] " + point;
    }
}
Is there something I'm missing in the fabulous world of DataNucleus Spatial? Why does it fail whenever spatial is added? Do I need the JDO XML file even though I'm using annotations? Are there annotations not presented in the tutorial? If the JDO XML file shown in the tutorial is required and is the reason I'm getting these errors, where do I put it? I'm currently three weeks behind on my project and am about to switch to Hibernate if this isn't fixed soon.
You don't present a stack trace, so it's impossible to tell anything other than that DBCP is causing the problem, and you could easily use any of the other connection pools that are supported. If some Oracle Connection object cannot be cast to some other JDBC connection, then maybe the Oracle JDBC driver is built for a different version of JDBC than this version of DBCP (some versions of JDBC break backwards compatibility). No info is provided in the post to confirm or rule that out (the log tells you some of that). As already said, there are ample other connection pools available.
The DN spatial tutorial is self-contained, has download and GitHub links, and defines where a JDO XML file would go if you were using one. The tutorial, as provided, works.
Finally, this may be worth a read ...
In order to avoid the cannot be cast to oracle.jdbc.OracleConnection error, I suggest you use version 3.2.7 of datanucleus-geospatial, which can be found in the central Maven repository.
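For example, with Maven (a sketch; verify the exact coordinates on Maven Central for your DataNucleus version):

<!-- Assumed coordinates for the DataNucleus geospatial plugin. -->
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-geospatial</artifactId>
    <version>3.2.7</version>
</dependency>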
