I am building an application with Hibernate Search 4.5.1 and Spring 4.0.5.RELEASE. I am trying to index the following class:
@Entity
@Indexed
@Analyzer(impl= org.apache.lucene.analysis.standard.StandardAnalyzer.class)
@Table(name="SONG")
@XmlRootElement(name="song")
public class Song
{
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "ID", updatable = false, nullable = false)
private Long id;
@Field(store = Store.YES)
@Column(name="NAME", length=255)
private String name;
@Field(store = Store.YES)
@Column(name="ALBUM", length=255)
private String album;
@Field(store = Store.YES)
@Column(name="ARTIST", length=255)
private String artist;
@NotNull
@Column(name="PATH", length=255)
private String path;
@NotNull
@Column(name="PATH_COVER", length=255)
private String cover;
@NotNull
@Column(name="LAST_VOTE")
private Date date;
@Field(store = Store.YES)
@NotNull
@Column(name="N_VOTES")
private int nvotes;
@NotNull
@Column(name="ACTIVE", nullable=false, columnDefinition="TINYINT(1) default 0")
private boolean active;
@OneToOne(fetch=FetchType.LAZY)
@JoinColumn(name="IMAGE_ID",insertable=true,updatable=true,nullable=false,unique=false)
private Image image;
@IndexedEmbedded
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "PLAYLIST_ID", nullable = false)
private PlayList playList;
@OneToMany(mappedBy = "song")
private Set<UserVotes> userServices = new HashSet<UserVotes>();
I am building a JUnit test case which looks like this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:jukebox-servlet-test.xml"})
@Transactional
public class SongDaoTest {
@Autowired
public I_PlaceDao placeDao;
@Autowired
public I_PlayListDao playListDao;
@Autowired
public I_SongDao songDao;
@Before
public void prepare() throws Exception
{
Operation operation = sequenceOf(CommonOperations.DISABLE_CONTRAINTS, CommonOperations.DELETE_ALL,CommonOperations.INSERT_SONG_DATA, CommonOperations.ENABLE_CONTRAINTS);
DbSetup dbSetup = new DbSetup(new DriverManagerDestination("jdbc:mysql://localhost:3306/jukebox", "root", "mpsbart"), operation);
dbSetup.launch();
FullTextSession fullTextSession = Search.getFullTextSession(placeDao.getSession());
fullTextSession.createIndexer().startAndWait();
}
@Test
@Rollback(false)
public void searchTest()
{
PlayList playList = playListDao.read(1l);
List<Song> songs = songDao.search(playList, "offspring", 1, 10);
assertEquals(10, songs.size());
}
The search method implementation is:
@SuppressWarnings("unchecked")
public List<Song> search(PlayList playlist, String searchTerm,int page,int limit)
{
FullTextSession fullTextSession = Search.getFullTextSession(getSession());
QueryBuilder queryBuilder = fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(Song.class).get();
BooleanQuery luceneQuery = new BooleanQuery();
luceneQuery.add(queryBuilder.keyword().onFields("name","album","artist").matching("*"+searchTerm+"*").createQuery(), BooleanClause.Occur.MUST);
luceneQuery.add(queryBuilder.phrase().onField("playList.place.id").sentence("\""+playlist.getPlace().getId()+"\"").createQuery(), BooleanClause.Occur.MUST);
luceneQuery.add(queryBuilder.phrase().onField("playList.id").sentence("\""+playlist.getId()+"\"").createQuery(), BooleanClause.Occur.MUST);
// wrap Lucene query in a javax.persistence.Query
FullTextQuery query = fullTextSession.createFullTextQuery(luceneQuery, Song.class);
org.apache.lucene.search.Sort sort = new Sort(new SortField("n_votes",SortField.INT));
query.setSort(sort);
List<Song> songs = query.setFirstResult(page*limit).setMaxResults(limit).list();
return songs;
}
The test fails: it does not find any matching objects. When I inspect the index with Luke I can see that there are matching documents, and if I run the query generated by Hibernate Search directly in Luke it does return results. The generated query is: +(name:metallica album:metallica artist:metallica) +playList.place.id:"1" +playList.id:"1"
I have also noticed in Luke that some index terms are at most six characters long. For instance, one song's artist is "The Offspring", and the terms stored in the index are "the" and "offspr". The first one is fine, but shouldn't the second term be "offspring"? Why is the name being truncated?
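To double-check what the analyzer actually emits for a given input, you can run it directly against a sample string. This is only a quick sketch, assuming the Lucene 3.x TokenStream API bundled with Hibernate Search 4.5, with fullTextSession obtained as in the test above:
// needs java.io.StringReader plus org.apache.lucene.analysis.Analyzer, TokenStream and tokenattributes.CharTermAttribute
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer(Song.class);
TokenStream stream = analyzer.tokenStream("artist", new StringReader("The Offspring"));
CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
stream.reset();
while (stream.incrementToken()) {
    System.out.println(term.toString()); // prints each term exactly as it would be written to the index
}
stream.end();
stream.close();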
In case it helps anybody, I was able to fix it by changing the query to this:
FullTextSession fullTextSession = org.hibernate.search.Search.getFullTextSession(getSession());
QueryBuilder qb = fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(Song.class).get();
if(searchTerm==null || searchTerm.equals(""))
searchTerm="*";
else
searchTerm="*"+searchTerm+"*";
Query luceneQuery1 = qb.bool()
.should(qb.keyword().wildcard().onField("name").matching(searchTerm).createQuery())
.should(qb.keyword().wildcard().onField("album").matching(searchTerm).createQuery())
.should(qb.keyword().wildcard().onField("artist").matching(searchTerm).createQuery()).createQuery();
Query luceneQuery2 = qb.bool()
.must(qb.keyword().wildcard().onField("playList.place.id").matching(playlist.getPlace().getId()).createQuery())
.must(qb.keyword().wildcard().onField("playList.id").matching(playlist.getId()).createQuery())
.createQuery();
BooleanQuery finalLuceneQuery=new BooleanQuery();
finalLuceneQuery.add(luceneQuery1, BooleanClause.Occur.MUST);
finalLuceneQuery.add(luceneQuery2, BooleanClause.Occur.MUST);
FullTextQuery query = fullTextSession.createFullTextQuery(finalLuceneQuery, Song.class);
org.apache.lucene.search.Sort sort = new Sort(new SortField("nvotes",SortField.INT,true));
query.setSort(sort);
List<Song> songs = query.setFirstResult(page*limit).setMaxResults(limit).list();
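One caveat with this fix: wildcard terms are not run through the analyzer, so if StandardAnalyzer lowercased the indexed terms, the user input should be lowercased as well before wrapping it in wildcards. A minimal adjustment (my suggestion, not part of the original fix) to the else branch above would be:
searchTerm = "*" + searchTerm.toLowerCase() + "*";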
If you need to query on whether a field value is null or not null, you must index the field with a null token by annotating it as follows:
@Field(index = Index.YES, analyze = Analyze.NO, store = Store.YES, indexNullAs = Field.DEFAULT_NULL_TOKEN)
Then search on that field. If you want the documents where the value is null:
booleanQuery.must(qb.keyword().onField("callReminder").matching("null").createQuery());
If you do not want the documents where the value is null:
booleanQuery.must(qb.keyword().onField("callReminder").matching("null").createQuery()).not();
Reference documentation: http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html/search-mapping.html#search-mapping-entity
I have a Spring Boot project with Apache Camel (using the Maven dependencies camel-spring-boot-starter, camel-jpa-starter, and camel-endpointdsl).
There are the following three entities:
@Entity
@Table(name = RawDataDelivery.TABLE_NAME)
@BatchSize(size = 10)
public class RawDataDelivery extends PersistentObjectWithCreationDate {
protected static final String TABLE_NAME = "raw_data_delivery";
private static final String COLUMN_CONFIGURATION_ID = "configuration_id";
private static final String COLUMN_SCOPED_CALCULATED = "scopes_calculated";
@Column(nullable = false, name = COLUMN_SCOPED_CALCULATED)
private boolean scopesCalculated;
@OneToMany(mappedBy = "rawDataDelivery", fetch = FetchType.LAZY)
private Set<RawDataFile> files = new HashSet<>();
@CollectionTable(name = "processed_scopes_per_delivery")
@ElementCollection(targetClass = String.class)
private Set<String> processedScopes = new HashSet<>();
// Getter/Setter
}
@Entity
@Table(name = RawDataFile.TABLE_NAME)
@BatchSize(size = 100)
public class RawDataFile extends PersistentObjectWithCreationDate {
protected static final String TABLE_NAME = "raw_data_files";
private static final String COLUMN_CONFIGURATION_ID = "configuration_id";
private static final String COLUMN_RAW_DATA_DELIVERY_ID = "raw_data_delivery_id";
private static final String COLUMN_PARENT_ID = "parent_file_id";
private static final String COLUMN_IDENTIFIER = "identifier";
private static final String COLUMN_CONTENT = "content";
private static final String COLUMN_FILE_SIZE_IN_BYTES = "file_size_in_bytes";
@ManyToOne(optional = true, fetch = FetchType.LAZY)
@JoinColumn(name = COLUMN_RAW_DATA_DELIVERY_ID)
private RawDataDelivery rawDataDelivery;
@Column(name = COLUMN_IDENTIFIER, nullable = false)
private String identifier;
@Lob
@Column(name = COLUMN_CONTENT, nullable = true)
private Blob content;
@Column(name = COLUMN_FILE_SIZE_IN_BYTES, nullable = false)
private long fileSizeInBytes;
// Getter/Setter
}
@Entity
@TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
@Table(name = RawDataRecord.TABLE_NAME, uniqueConstraints = ...)
public class RawDataRecord extends PersistentObjectWithCreationDate {
public static final String TABLE_NAME = "raw_data_records";
static final String COLUMN_RAW_DATA_FILE_ID = "raw_data_file_id";
static final String COLUMN_INDEX = "index";
static final String COLUMN_CONTENT = "content";
static final String COLUMN_HASHCODE = "hashcode";
static final String COLUMN_SCOPE = "scope";
@ManyToOne(optional = false)
@JoinColumn(name = COLUMN_RAW_DATA_FILE_ID)
private RawDataFile rawDataFile;
@Column(name = COLUMN_INDEX, nullable = false)
private long index;
@Lob
@Type(type = "jsonb")
@Column(name = COLUMN_CONTENT, nullable = false, columnDefinition = "jsonb")
private String content;
@Column(name = COLUMN_HASHCODE, nullable = false)
private String hashCode;
@Column(name = COLUMN_SCOPE, nullable = true)
private String scope;
}
What I am trying to do is build a route with Apache Camel that selects all deliveries with the flag scopesCalculated == false and calculates/updates the scope variable of all records attached to the files of these deliveries. This should happen in one database transaction. Once all scopes are updated I want to set the scopesCalculated flag to true and commit the changes to the database (in my case PostgreSQL).
What I have so far is this:
String r3RouteId = ...;
var dataSource3 = jpa(RawDataDelivery.class.getName())
.lockModeType(LockModeType.NONE)
.delay(60).timeUnit(TimeUnit.SECONDS)
.consumeDelete(false)
.query("select rdd from RawDataDelivery rdd where rdd.scopesCalculated is false and rdd.configuration.id = " + configuration.getId())
;
from(dataSource3)
.routeId(r3RouteId)
.routeDescription(configuration.getName())
.messageHistory()
.transacted()
.process(exchange -> {
RawDataDelivery rawDataDelivery = exchange.getIn().getBody(RawDataDelivery.class);
rawDataDelivery.setScopesCalculated(true);
})
.transform(new Expression() {
@Override
public <T> T evaluate(Exchange exchange, Class<T> type) {
RawDataDelivery rawDataDelivery = exchange.getIn().getBody(RawDataDelivery.class);
return (T)rawDataDelivery.getFiles();
}
})
.split(bodyAs(Iterator.class)).streaming()
.transform(new Expression() {
@Override
public <T> T evaluate(Exchange exchange, Class<T> type) {
RawDataFile rawDataFile = exchange.getIn().getBody(RawDataFile.class);
// rawDataRecordJpaRepository is an autowired interface by spring with the following method:
// @Lock(value = LockModeType.NONE)
// Stream<RawDataRecord> findByRawDataFile(RawDataFile rawDataFile);
// we may have many records per file (100k and more), so we don't want to keep them all in memory.
// instead we try to stream the resultset and aggregate them by 500 partitions for processing
return (T)rawDataRecordJpaRepository.findByRawDataFile(rawDataFile);
}
})
.split(bodyAs(Iterator.class)).streaming()
.aggregate(constant("all"), new GroupedBodyAggregationStrategy())
.completionSize(500)
.completionTimeout(TimeUnit.SECONDS.toMillis(5))
.process(exchange -> {
List<RawDataRecord> rawDataRecords = exchange.getIn().getBody(List.class);
for (RawDataRecord rawDataRecord : rawDataRecords) {
rawDataRecord.setScope("abc");
}
})
;
Basically this works, but the records of the last partition are not updated. In my example I have 43782 records, but only 43500 are updated; 282 remain with scope == null.
I really don't understand Camel's JPA transaction and session management, and I can't find examples of how to update JPA/Hibernate entities with Camel (without using the SQL component).
I have already tried several solutions, but none of them work. Most attempts end with "EntityManager/Session closed", "no transaction is in progress" or "Batch update failed. Expected result 1 but was 0", ...
I tried the following:
setting jpa(...).joinTransaction(false).advanced().sharedEntityManager(true)
using .enrich(jpa(RawDataRecord.class.getName()).query("select rec from RawDataRecord rec where rawDataFile = ${body}")) instead of .transform(...) with the JPA repository for the records
using the Hibernate session from the Camel headers to update/save/flush entities: "Session session = exchange.getIn().getHeader(JpaConstants.ENTITY_MANAGER, Session.class);"
updating via a new jpa component at the end of the route:
.split(bodyAs(Iterator.class)).streaming()
.to(jpa(RawDataRecord.class.getName()).usePersist(false).flushOnSend(false))
Do you have any other ideas / recommendations?
I have a document named plan that corresponds to the Plan entity:
@Entity
@Table(name = "plan")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
@org.springframework.data.elasticsearch.annotations.Document(indexName = "plan")
public class Plan extends AbstractAuditingEntity implements Serializable {
@Id
@GeneratedValue
@Column(name = "id")
@Field(type = FieldType.Text, fielddata = true)
private UUID id;
@NotNull
@Column(name = "name", nullable = false, unique = true)
private String name;
@Column(name = "description")
private String description;
@Column(name = "current_price")
@Field(type = FieldType.Text, fielddata = true)
private Float currentPrice;
}
Here is my search method implementation:
public Page<Plan> search(String query, Pageable pageable) {
NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(queryStringQuery("*"+query+"*").defaultOperator(Operator.AND));
nativeSearchQuery.setPageable(pageable);
List<Plan> hits = elasticsearchTemplate
.search(nativeSearchQuery, Plan.class)
.map(SearchHit::getContent)
.stream()
.collect(Collectors.toList());
return new PageImpl<>(hits, pageable, hits.size());
}
Name and description are searchable, but the float field isn't. Marking it as FieldType.Float doesn't give the expected result.
I have two tables and I need a OneToOne mapping with a where clause.
select * from person_details inner join address_details
on address_details.pid=person_details.pid AND person_details.exist_flag = 'Y' AND address_details.address_exist_flag = 'Y'
Table 1
public class PersonDetails {
@Id
private String pid;
@Column(name = "first_name")
private String firstName;
@Column(name = "last_name")
private String lastName;
@Column(name = "exist_flag")
private String existFlag;
@OneToOne(mappedBy = "personDetails", cascade = CascadeType.ALL)
@Where(clause = "addressExistFlag = 'Y'")
private AddressDetails addressDetails;
}
Table 2
@Data
@NoArgsConstructor
@Entity
@Table(name = "address_details")
public class AddressDetails {
@Id
private String pid;
private String street;
@Column(name = "address_exist_flag")
private String addressExistFlag;
@OneToOne(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
@JoinColumn(name = "pid", insertable = false, updatable = false)
private PersonDetails personDetails;
}
I need data to be fetched if both addressExistFlag = 'Y' and existFlag = 'Y'.
In the current scenario, when I fetch data via the Spring Batch reader repository as shown below, only existFlag = 'Y' is considered. Is this because of an incorrect mapping, or because of the way I have used it in Spring Batch?
The read repository looks like this:
public interface PersonDetailsRepository extends JpaRepository<PersonDetails, String> {
Page<PersonDetails> findByExistFlag(String existFlag, Pageable pageable);
}
The Spring Batch reader looks like this:
@Bean
RepositoryItemReader<PersonDetails> personDetailsItemReader() {
Map<String, Sort.Direction> sort = new HashMap<>();
sort.put("ExistFlag", Sort.Direction.ASC);
return new RepositoryItemReaderBuilder<PersonDetails>()
.repository(personDetailsRepository)
.methodName("findByExistFlag")
.arguments("Y")
.sorts(sort)
.name("personDetailsItemReader")
.build();
}
You are only querying by existFlag.
You have to add the other flag too:
public interface PersonDetailsRepository extends JpaRepository<PersonDetails, String> {
Page<PersonDetails> findByExistFlagAndAddressDetailsAddressExistFlag(
String existFlag, String addressExistFlag, Pageable pageable);
}
@Bean
RepositoryItemReader<PersonDetails> personDetailsItemReader() {
Map<String, Sort.Direction> sort = new HashMap<>();
sort.put("ExistFlag", Sort.Direction.ASC);
return new RepositoryItemReaderBuilder<PersonDetails>()
.repository(personDetailsRepository)
.methodName("findByExistFlagAndAddressDetailsAddressExistFlag")
.arguments("Y", "Y")
.sorts(sort)
.name("personDetailsItemReader")
.build();
}
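For reference, the derived query method findByExistFlagAndAddressDetailsAddressExistFlag traverses the addressDetails association, so it should translate to JPQL roughly like the following (an illustration of the derived query, not actual generated output):
select p from PersonDetails p
where p.existFlag = ?1
  and p.addressDetails.addressExistFlag = ?2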
I am using Hibernate Search with Spring Boot to create a searchable REST API. When trying to POST an instance of "Training", I receive the following stack traces. Neither of them is very insightful to me, which is why I am reaching out for help.
Stack trace:
https://pastebin.com/pmurg1N3
It appears to me that it is trying to index a null entity!? How can that happen? Any ideas?
The entity:
@Entity @Getter @Setter @NoArgsConstructor
@ToString(onlyExplicitlyIncluded = true)
@Audited @Indexed(index = "Training")
@AnalyzerDef(name = "ngram",
tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
filters = {
@TokenFilterDef(factory = StandardFilterFactory.class),
@TokenFilterDef(factory = LowerCaseFilterFactory.class),
@TokenFilterDef(factory = StopFilterFactory.class),
@TokenFilterDef(factory = NGramFilterFactory.class,
params = {
@Parameter(name = "minGramSize", value = "2"),
}
)
}
)
@Analyzer(definition = "ngram")
public class Training implements BaseEntity<Long>, OwnedEntity {
@Id
@GeneratedValue
@ToString.Include
private Long id;
@NotNull
@RestResourceMapper(context = RestResourceContext.IDENTITY, path = "/companies/{id}")
@JsonProperty(access = Access.WRITE_ONLY)
@JsonDeserialize(using = RestResourceURLSerializer.class)
private Long owner;
@NotNull
@Field(index = Index.YES, analyze = Analyze.YES, store = Store.YES)
private String name;
@Column(length = 10000)
private String goals;
@Column(length = 10000)
private String description;
@Enumerated(EnumType.STRING)
@Field(index = Index.YES, store = Store.YES, analyze = Analyze.NO, bridge=@FieldBridge(impl=EnumBridge.class))
private Audience audience;
@Enumerated(EnumType.STRING)
@Field(index = Index.YES, store = Store.YES, analyze = Analyze.NO, bridge=@FieldBridge(impl=EnumBridge.class))
private Level level;
@ManyToMany
@Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED)
@NotNull @Size(min = 1)
@IndexedEmbedded
private Set<ProductVersion> versions;
@NotNull
private Boolean enabled = false;
@NotNull
@Min(1)
@IndexedEmbedded
@Field(index = Index.YES, store = Store.YES, analyze = Analyze.NO)
@NumericField
private Integer maxStudents;
@NotNull
@ManyToOne(fetch = FetchType.LAZY)
private Agenda agenda;
@NotNull
@Min(1)
@Field(index = Index.YES, store = Store.YES, analyze = Analyze.NO)
@NumericField
private Integer durationDays;
@IndexedEmbedded
@Audited(targetAuditMode = RelationTargetAuditMode.NOT_AUDITED)
@ManyToMany(cascade = CascadeType.PERSIST)
private Set<Tag> tags = new HashSet<>();
I'd say either your versions collection or your tags collection contains null objects, which is generally not something we expect in a Hibernate ORM association, and apparently not something Hibernate Search expects either.
Can you check that in debug mode?
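If you want to verify that programmatically before saving, a minimal sketch (assuming the Lombok-generated getters on the Training instance, here called training) could look like this:
// hypothetical pre-save check for null elements in the indexed associations
if (training.getVersions() != null && training.getVersions().contains(null)) {
    throw new IllegalStateException("versions contains a null element");
}
if (training.getTags() != null && training.getTags().contains(null)) {
    throw new IllegalStateException("tags contains a null element");
}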
I followed this question but it did not work.
I have two entities, Account and UserTransaction.
Account.java
@Entity
@Access(AccessType.FIELD)
public class Account {
@Id
private Integer accountNumber;
private String holderName;
private String mobileNumber;
private Double balanceInformation;
public Account(Integer accountNumber, String holderName, String mobileNumber, Double balanceInformation) {
this.accountNumber = accountNumber;
this.holderName = holderName;
this.mobileNumber = mobileNumber;
this.balanceInformation = balanceInformation;
}
}
UserTransaction.java
@Entity
@Access(AccessType.FIELD)
@Table(name = "user_transaction")
public class Transaction {
@Id
private Long transactionId;
@ManyToOne
@JoinColumn(name = "accountNumber")
private Account accountNumber;
private Double transactionAmount;
@Column(nullable = false, columnDefinition = "TINYINT", length = 1)
private Boolean transactionStatus;
private String statusMessage;
@Temporal(TemporalType.TIMESTAMP)
@Column(name="timestamp", columnDefinition="TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
private Date timestamp;
public Transaction(Long transactionId, Account account,
Double transactionAmount,
Boolean transactionStatus,
String statusMessage) {
this.transactionId = transactionId;
this.accountNumber = account;
this.transactionAmount = transactionAmount;
this.transactionStatus = transactionStatus;
this.statusMessage = statusMessage;
}
}
and my TransactionRepository is as follows:
@RepositoryRestResource(collectionResourceRel = "transactions", path = "transactions")
public interface JpaTransactionRepository extends JpaRepository<Transaction, Long>, TransactionRepository {
@Query(value = "select t from Transaction t where t.accountNumber.accountNumber = :accountNumber")
Iterable<Transaction> findByAccountNumber(@Param("accountNumber") Integer accountNumber);
}
I have constructed a JSON payload as specified in the Stack Overflow post linked at the top:
{
"transactionId" : "3213435454342",
"transactionAmount" : 5.99,
"transactionStatus" : true,
"statusMessage" : null,
"timestamp" : "2017-03-09T05:11:41.000+0000",
"accountNumber" : "http://localhost:8080/accounts/90188977"
}
When I try to execute a POST with the above JSON, I get:
Caused by: java.sql.SQLIntegrityConstraintViolationException: Column 'account_number' cannot be null
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:533)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:513)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:115)
at com.mysql.cj.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:1983)
How do I save an entity that has relationships with Spring Data REST?
The problem is that with @JoinColumn(name = "accountNumber") you hard-code the database column name as accountNumber. Normally the naming strategy would insert underscores instead of using mixed-case column names.
So it should work if you change the line to @JoinColumn(name = "account_number").
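In other words, the mapping in the Transaction entity would look like this (the rest of the field declaration stays as in the question):
@ManyToOne
@JoinColumn(name = "account_number")
private Account accountNumber;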