Factory bean 'customConversions' not found - spring

I'm trying to update my MongoDB libraries to the latest, and now have a couple of errors, which might or might not be related. The first is in my applicationContext.xml, where I have the error "Factory bean 'customConversions' not found" next to this section:
<mongo:mapping-converter base-package="com.my.model">
    <mongo:custom-converters base-package="com.my.model.converters"/>
</mongo:mapping-converter>
I can't see from the docs anything I might be missing. What could be causing this, and what can I do to fix it?
If I try to run the app, I now get:
org.springframework.data.mapping.model.MappingException: No mapping metadata found for java.util.Date
at org.springframework.data.mongodb.core.convert.MappingMongoConverter.read(MappingMongoConverter.java:206) ~[spring-data-mongodb-1.1.1.RELEASE.jar:na]
I'm using the following Maven Dependencies:
org.springframework.data: spring-data-mongodb: 1.1.1.RELEASE
org.springframework: core, spring-context, etc.: 3.2.1.RELEASE
Is this just a broken release, or am I doing something else wrong? I had no issues using java.util.Date in my model classes before.

Did you add it to the MongoTemplate? http://static.springsource.org/spring-data/data-mongo/docs/1.0.0.M5/reference/html/#d0e2718
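In XML terms, "adding it to the MongoTemplate" means handing the mapping converter to the template's constructor so its customConversions bean is actually created and used. A minimal sketch (the db-factory settings and bean ids here are assumptions, not from the question):

<mongo:db-factory id="mongoDbFactory" dbname="mydb"/>

<mongo:mapping-converter id="mappingConverter" base-package="com.my.model">
    <mongo:custom-converters base-package="com.my.model.converters"/>
</mongo:mapping-converter>

<!-- pass the converter to the template explicitly -->
<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongoDbFactory"/>
    <constructor-arg ref="mappingConverter"/>
</bean>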

OK, this is some time later, but it might benefit people upgrading code and database from legacy MongoDB versions.
I think that Mongo changed the way some data was stored internally. Or it could be that we exported data to JSON and then imported it again. Either way, we were left with data that had both incorrect Date and incorrect ObjectId representations. spring-data-mongo used to handle this but for whatever reason doesn't any longer. The fix for us was to run the following type of script in the Mongo shell:
db.entity.find().forEach(
    function(o) {
        delete o._id;
        // Legacy dates were stored as {sec: <epoch seconds>, ...}; rebuild a real Date.
        // Note: ISODate() expects an ISO string, so new Date(seconds * 1000) is used here.
        if (typeof(o.createdTs) !== 'undefined' && typeof(o.createdTs.sec) !== 'undefined') {
            o.createdTs = new Date(o.createdTs.sec * 1000);
        }
        if (typeof(o.updatedTs) !== 'undefined' && typeof(o.updatedTs.sec) !== 'undefined') {
            o.updatedTs = new Date(o.updatedTs.sec * 1000);
        }
        try {
            db.entity2.insert(o);
        } catch (err) {
            print("Following node conversion failed. Error is: " + err);
            printjson(o);
        }
    }
);
db.entity2.renameCollection('entity', true);
Now this worked for us because we weren't using the Mongo object id at all - we've been using a different, uniquely indexed UUID field as an ID instead. If you're referring to the objectId anywhere else, you will need to create an ObjectId from the old string id and use that.
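For illustration, that conversion could happen inside the same forEach with something along these lines (a sketch; it assumes the legacy id survived as a 24-character hex string in a field hypothetically called legacyId):

// Hypothetical: rebuild a real ObjectId from a legacy string id
if (typeof(o.legacyId) === 'string' && o.legacyId.length === 24) {
    o._id = ObjectId(o.legacyId);
}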
This has enabled us to upgrade to spring-data-1.1.0 and beyond, and has meant that we can now introduce spring-data-neo4j, which we were previously unable to do with this project due to this issue.

I had the same mapping exception (org.springframework.data.mapping.model.MappingException). One of the MongoDB records somehow had a date in the following format that could not be decoded into a java.util.Date:
"createdTime": {
"dateTime": ISODate("2016-09-15T02:01:00.560Z"),
"offset": {
"_id": "Z",
"totalSeconds": 0
},
"zone": {
"_class": "java.time.ZoneOffset",
"_id": "Z",
"totalSeconds": 0
}
}
Everything worked fine after I deleted that record.
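If you would rather locate all records stored in that shape before deleting or migrating them, a query keyed on one of the extra fields shown above should find them (a sketch; db.entity is a stand-in for your collection name):

db.entity.find({ "createdTime.zone._class": "java.time.ZoneOffset" })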

Related

How to add reserved keywords in liquibase OracleDatabase?

I'm trying to make my Spring Boot JPA application, which already runs with MySQL and H2, compliant with Oracle DB.
The data generated by liquibase unfortunately uses some of Oracle's reserved keywords as table or column names.
The good news is that the hibernate and liquibase implementations can detect these keywords and "quote" them when querying the database (using objectQuotingStrategy="QUOTE_ONLY_RESERVED_KEYWORDS" for liquibase, and spring.jpa.properties.hibernate.auto_quote_keyword: true for hibernate).
The bad news is hibernate and liquibase do not share the same list of reserved keywords for Oracle.
For example, value is not recognized as a reserved keyword by liquibase, but is by hibernate (which uses ANSI SQL:2003 keywords).
One of my liquibase changeSets creates a table with a lower case value column, so Liquibase creates the table with an unquoted lowercase value column, and Oracle DB automatically turns it into an upper case VALUE column. Now when hibernate tries to fetch that column, it recognizes value and quotes it (SELECT "value" FROM ...), which makes it case-sensitive, so the column is not found (ORA-00904).
I thought I had found a workaround by extending SpringLiquibase and adding my custom keywords, as described here: https://liquibase.jira.com/browse/CORE-3324. The problem is that this does not seem to work with the OracleDatabase implementation, which overwrites SpringLiquibase's set of reserved keywords (and of course, the isReservedWord() method uses OracleDatabase's set).
For now, I'll use the QUOTE_ALL_OBJECTS quoting strategy for liquibase and hibernate.globally_quoted_identifiers.
But, just out of curiosity, I wanted to know if the set of reserved keywords used by liquibase for Oracle could be appended.
spring boot version: 2.3.9.RELEASE.
hibernate-core version (spring boot dependency): 5.4.28
liquibase-core version (spring boot dependency): 3.8.9
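For reference, the interim workaround mentioned above would look roughly like this (a sketch; the YAML location follows Spring Boot conventions, and objectQuotingStrategy can sit on the databaseChangeLog or changeSet element):

# application.yml
spring:
  jpa:
    properties:
      hibernate.globally_quoted_identifiers: true

<!-- db.changelog.xml -->
<databaseChangeLog objectQuotingStrategy="QUOTE_ALL_OBJECTS" ...>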
Hmm, in the case of Oracle you have keywords and reserved words:
reserved words cannot be used as identifiers;
keywords can be used as identifiers, but it is not recommended.
You can get list of them directly from database:
select KEYWORD, RESERVED from v$reserved_words;
...
1864 rows selected
What about using uppercase names everywhere in the source code?
It looks like Liquibase depends on some JDBC driver functionality, which does not work here.
OracleDatabase.java:
public void setConnection(DatabaseConnection conn) {
    //noinspection HardCodedStringLiteral,HardCodedStringLiteral,HardCodedStringLiteral,HardCodedStringLiteral,
    // HardCodedStringLiteral
    reservedWords.addAll(Arrays.asList("GROUP", "USER", "SESSION", "PASSWORD", "RESOURCE", "START", "SIZE", "UID", "DESC", "ORDER")); //more reserved words not returned by driver
    Connection sqlConn = null;
    if (!(conn instanceof OfflineConnection)) {
        try {
            /*
             * Don't try to call getWrappedConnection if the conn instance
             * is not a JdbcConnection. This happens for OfflineConnection.
             * see https://liquibase.jira.com/browse/CORE-2192
             */
            if (conn instanceof JdbcConnection) {
                sqlConn = ((JdbcConnection) conn).getWrappedConnection();
            }
        } catch (Exception e) {
            throw new UnexpectedLiquibaseException(e);
        }
        if (sqlConn != null) {
            tryProxySession(conn.getURL(), sqlConn);
            try {
                //noinspection HardCodedStringLiteral
                reservedWords.addAll(Arrays.asList(sqlConn.getMetaData().getSQLKeywords().toUpperCase().split(",\\s*")));
            } catch (SQLException e) {
                //noinspection HardCodedStringLiteral
                Scope.getCurrentScope().getLog(getClass()).info("Could get sql keywords on OracleDatabase: " + e.getMessage());
                //can not get keywords. Continue on
            }
        }
If Liquibase calls sqlConn.getMetaData().getSQLKeywords() and this does not return proper output, then your chances are limited. It might be a bug in the JDBC driver, or your application might not have the SELECT_CATALOG_ROLE privilege and so cannot see the v$reserved_words view (if the JDBC driver queries this internally).
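That said, if you want to try anyway, the extension point I know of is to subclass OracleDatabase and register the subclass with a higher priority so DatabaseFactory picks it instead. A hedged sketch (it assumes AbstractJdbcDatabase's public addReservedWords() is available in your liquibase-core version, and that registration happens before SpringLiquibase runs):

public class ExtendedOracleDatabase extends OracleDatabase {

    @Override
    public void setConnection(DatabaseConnection conn) {
        super.setConnection(conn); // let OracleDatabase build its own list first
        // add the ANSI keywords Hibernate quotes but Liquibase misses
        addReservedWords(Arrays.asList("VALUE"));
    }

    @Override
    public int getPriority() {
        return super.getPriority() + 1; // outrank the stock OracleDatabase
    }
}

// somewhere before Liquibase runs, e.g. in a @PostConstruct:
DatabaseFactory.getInstance().register(new ExtendedOracleDatabase());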

How to configure Serilog Elasticsearch error handling in appsettings in .NET Core 3.1?

I am trying to configure Serilog in a .NET Core 3.1 (C#) project, but I want to do it completely in appsettings.json. For file sinks I did all the configuration without any problem, but for Elasticsearch I can't figure out how to translate the lines below into appsettings.json so that it works:
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                       EmitEventFailureHandling.WriteToFailureSink |
                       EmitEventFailureHandling.RaiseCallback,
    FailureSink = new FileSink("./failures.txt", new JsonFormatter(), null)
})
The official documentation shows just a basic example for EmitEventFailure, as follows:
"emitEventFailure": "WriteToSelfLog"
It doesn't show how a combination (multiple flags) of EmitEventFailure values should be written. Same situation for FailureSink:
"failureSink": "My.Namespace.MyFailureSink, My.Assembly.Name"
I don't know what exactly this means, and I can't figure it out for the code sample listed above.
Finally, for FailureCallback the documentation doesn't mention any option to do this through appsettings.json. But this option is not a big deal for me; at worst I can omit it.
Thanks for any hints!
After long hours of research on this, I came up with the following answer.
For the "emitEventFailure" property, we only need to put commas between the values, like this:
"emitEventFailure": "WriteToFailureSink, WriteToSelfLog, RaiseCallback",
For the failureSink property, you can refer to this link for complex parameter value binding. We need to set the "type" of the object before specifying each of its parameters, including nested parameter objects.
My example sets Azure Blob Storage as the failure sink:
"failureSink": {
"type": "Serilog.Sinks.AzureBlobStorage.AzureBatchingBlobStorageSink, Serilog.Sinks.AzureBlobStorage",
"blobServiceClient": {
"type": "Azure.Storage.Blobs.BlobServiceClient, Azure.Storage.Blobs",
"connectionString": "DefaultEndpointsProtocol=https;AccountName=xyz;AccountKey=xyzkey;EndpointSuffix=xyz.net"
},
"textFormatter": "Serilog.Formatting.Elasticsearch.ElasticsearchJsonFormatter, Serilog.Formatting.Elasticsearch",
"storageContainerName": "test-api",
"storageFileName": "test/{yyyy}-{MM}-{dd}.log",
"period": "0.00:00:30",
"batchSizeLimit": "1000"
}
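Putting the pieces together, the whole sink entry might sit in appsettings.json roughly like this (a sketch; nodeUris and the Args nesting follow Serilog.Settings.Configuration conventions, and the failureSink body is the one from the block above):

"Serilog": {
  "Using": [ "Serilog.Sinks.Elasticsearch" ],
  "WriteTo": [
    {
      "Name": "Elasticsearch",
      "Args": {
        "nodeUris": "http://localhost:9200",
        "emitEventFailure": "WriteToSelfLog, WriteToFailureSink, RaiseCallback",
        "failureSink": {
          "type": "Serilog.Sinks.AzureBlobStorage.AzureBatchingBlobStorageSink, Serilog.Sinks.AzureBlobStorage",
          ...
        }
      }
    }
  ]
}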

Project New Array Field with Spring Data

As part of an aggregate operation, I need to unwind an array. I am wondering how I can put the object back into an array as part of the projection. Here is the MongoDB aggregate operation that works:
db.users.aggregate([
    { "$match": { ... } },
    { "$unwind": "$profiles" },
    { "$project": { "profiles": ["$profiles"] } }
])
And more specifically, how can I implement this using Spring Data MongoDB's ProjectionOperation:
{$project: {'profiles': ['$profiles']}}
This feature has been available since MongoDB 3.2.
Edit 1:
I looked through some related posts, and based on an answer by Christoph Strobl I came up with something that works:
AggregationOperation project = aggregationOperationContext -> {
    Document projection = new Document();
    projection.put("profiles", Arrays.<Object>asList("$profiles"));
    projection.put("_id", "$id");
    return new Document("$project", projection);
};
I am wondering if there is a better way of doing it though.
Any help/suggestion is very much appreciated. Thanks.
Unfortunately there is not.
You can replace $project by project() with an AggregationExpression to shorten it a bit.
// ...
unwind("profiles"),
project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles")
I created DATAMONGO-2312 to provide support for new array field projections in one of the next versions.
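For context, a complete pipeline built around that operation might look like this (a sketch; the match criteria are a placeholder, since the question elides the $match body, and the statics come from Aggregation and Criteria):

Aggregation agg = newAggregation(
    match(where("active").is(true)), // placeholder criteria
    unwind("profiles"),
    project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles")
);
// runs against the "users" collection from the original shell example
AggregationResults<Document> results = mongoTemplate.aggregate(agg, "users", Document.class);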

Repeated column mapping on JPA entity mapping

I've lost an entire day trying to understand what is going on and find a fix. I have a JPA mapped entity that, besides other properties, has the following:
@Entity
@Table(name = "xyz")
data class XYZ(
    ...
    @Column(name = "status", nullable = false)
    @Enumerated(EnumType.STRING)
    private var initialStatus: XYZStatus,
    ...
) {
    @Transient
    var status: XYZStatus = initialStatus
        get() = initialStatus
        set(nextStatus) {
            ...
            initialStatus = nextStatus
            field = nextStatus
        }
}
This has been working forever, since the class was first created. Now the situation is that every time I run my integration tests in IntelliJ IDEA (Ultimate 2018.2) they fail because the Spring context cannot be created. The error is: Caused by: org.hibernate.MappingException: Repeated column in mapping for entity: model.XYZ column: status (should be mapped with insert="false" update="false").
The bizarre part: this error happens only on my machine, and only when running the tests from inside the IDE. If I run the tests via Maven on the command line, everything is fine. I already tried changing the field name from status to something else, and the error just changes to whatever name I give the variable.
I already removed my repo and cloned it again, and removed and reinstalled IntelliJ. I really don't know what the source of this error could be. Any ideas?
Thanks!
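One hedged direction, suggested by the exception message itself, is to make sure only one mapping of the column is writable, or to hide the extra accessor from Hibernate under both access types. A Kotlin sketch, not verified against this setup:

// Option 1: hide the accessor from Hibernate for both field and property access
@field:Transient
@get:Transient
var status: XYZStatus = initialStatus
    get() = initialStatus
    set(nextStatus) {
        initialStatus = nextStatus
        field = nextStatus
    }

// Option 2: keep a second, read-only mapping of the column,
// as the exception message suggests
@Column(name = "status", insertable = false, updatable = false)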

Enabling nested any queries on OData V4 endpoints with WebAPI

I'm trying to build a nested any query like this ...
~/Api/Calendar?$filter=Roles/any(r:r/User/any(u:u/Name eq 'Joe Bloggs'))
If I remove the inner any clause, leaving me with ...
~/Api/Calendar?$filter=Roles/any(r:r/User/any())
... then the endpoint returns ...
{
    "error": {
        "code": "",
        "message": "The query specified in the URI is not valid. The Any/All nesting limit of '1' has been exceeded. 'MaxAnyAllExpressionDepth' can be configured on ODataQuerySettings or EnableQueryAttribute."
    }
}
... which I think sheds some light on the problem I actually have here.
So far I have tried to raise this limit with the following during my context initialisation, but it doesn't appear to be working ...
config.AddODataQueryFilter(new EnableQueryAttribute { MaxAnyAllExpressionDepth = 3 });
Does anyone have any ideas how I can do this globally? (I don't want to have to go to every GET action on every controller and set the depth.)
UPDATE:
So it turns out that where I inherit from my own BaseEntityController, the actions had the EnableQuery attribute, which superseded my global config change; hence my changes were not respected.
Simply removing the attribute from the actions themselves has all controllers that inherit from my base working with the new nested any/all limit, but I now seem to have the side effect that expands don't work any more ...
var query = new EnableQueryAttribute {
    MaxExpansionDepth = 8,
    PageSize = 100,
    MaxAnyAllExpressionDepth = 3,
    AllowedFunctions = System.Web.OData.Query.AllowedFunctions.All,
    AllowedLogicalOperators = System.Web.OData.Query.AllowedLogicalOperators.All,
    AllowedQueryOptions = System.Web.OData.Query.AllowedQueryOptions.All,
    AllowedArithmeticOperators = System.Web.OData.Query.AllowedArithmeticOperators.All,
    MaxTop = 1000
};
config.AddODataQueryFilter(query);
... as you can see, I tried throwing lots of extras in there, but it's not having any of it!
The simplest way I found to do this and have everything work is to apply the attribute on the base controller actions; it is then applied correctly to the actions on that controller and on any of its derived types.
It wasn't ideal, but I couldn't find a way to get a global fix for this to work as part of initialising the OData context.
Hopefully this will help someone out there.
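A sketch of that base-controller arrangement (the controller and repository names here are hypothetical):

public abstract class BaseEntityController<TEntity> : ODataController where TEntity : class
{
    protected abstract IQueryable<TEntity> Query();

    // applying the attribute on the base action covers every derived controller's GET
    [EnableQuery(MaxAnyAllExpressionDepth = 3, MaxExpansionDepth = 8, PageSize = 100)]
    public virtual IHttpActionResult Get()
    {
        return Ok(Query());
    }
}

public class CalendarController : BaseEntityController<Calendar>
{
    protected override IQueryable<Calendar> Query() => _db.Calendars; // hypothetical context
}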
