I'm upgrading a Spring 3 application to Spring 4. My @Repository has ParameterizedRowMapper objects to map the SQL results to objects. But since Spring 4 that interface has been deprecated "in favor of the regular SingleColumnRowMapper". But I use mappers for mapping multiple columns. How am I meant to map multiple columns using SingleColumnRowMapper? Or am I meant to be doing something completely different?
For example, here is the kind of code I have now:
private static final ParameterizedRowMapper<Thing> THING_ENTRY_MAPPER = new ParameterizedRowMapper<Thing>() {
    @Override
    public Thing mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Thing(rs.getLong(1), rs.getLong(2), rs.getInt(3));
    }
};
@Override
public List<Thing> getThings(ID id, long start, long end) {
    final Map<String, Object> params = new HashMap<String, Object>(4);
    putIDParams(params, id);
    putTimeRangeParams(params, start, end);
    return getNamedParameterJdbcTemplate().query(QUERY_THING, params,
            THING_ENTRY_MAPPER);
}
How should I be implementing that kind of functionality now?
The Javadoc seems to be wrong. The Spring Framework designers probably intend the plain RowMapper<Thing> interface to replace ParameterizedRowMapper<Thing>; that is, use the base interface.
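For example, the mapper from the question can be declared against the base interface with no other changes (the query() call site still compiles, since it accepts any RowMapper):

private static final RowMapper<Thing> THING_ENTRY_MAPPER = new RowMapper<Thing>() {
    @Override
    public Thing mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Thing(rs.getLong(1), rs.getLong(2), rs.getInt(3));
    }
};

On Java 8 this can be shortened to a lambda, since RowMapper has a single abstract method: (rs, rowNum) -> new Thing(rs.getLong(1), rs.getLong(2), rs.getInt(3)).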
I'm going to use @InsertOnlyProperty with Spring Boot 2.7, as it will take time for us to migrate to Spring Boot 3.0!
So I'm going to create my own DataAccessStrategy based on the DefaultDataAccessStrategy, and also override SqlParametersFactory so that I can pass the RelationalPersistentProperty::isInsertOnly condition to the getParameterSource method. That also means overriding RelationalPersistentProperty to add an isInsertOnly property. Is there a way to do that? Am I on the right track, or is there a better solution than switching to Spring Boot 3.0 now? Thank you!
Since @InsertOnlyProperty is only supported for the aggregate root (in Spring Boot 3.0), one approach could be to copy the data to a surrogate object and use a custom method to save it. It would look something like this:
public record MyAggRoot(@Id Long id,
        /* @InsertOnlyProperty */ Instant createdAt, int otherField) {}

public interface MyAggRootRepository
        extends Repository<MyAggRoot, Long>, MyAggRootRepositoryCustom { /* ... */ }

public interface MyAggRootRepositoryCustom {
    MyAggRoot save(MyAggRoot aggRoot);
}

@Component
public class MyAggRootRepositoryCustomImpl implements MyAggRootRepositoryCustom {

    @Autowired
    private JdbcAggregateOperations jao;

    // Override table name which would otherwise be derived from the class name
    @Table("my_agg_root")
    private record MyAggRootForUpdate(@Id Long id, int otherField) {}

    @Override
    public MyAggRoot save(MyAggRoot aggRoot) {
        // If this is a new instance, insert as-is
        if (aggRoot.id() == null) return jao.save(aggRoot);
        // Create a copy without the insert-only field
        var copy = new MyAggRootForUpdate(aggRoot.id(), aggRoot.otherField());
        jao.update(copy);
        return aggRoot;
    }
}
It is, however, a bit verbose, so it is only a reasonable solution if you need it in a few places.
I have a Spring + MongoDB application and I need to perform a 3-level operation across 3 collections.
Let's say I have collection A, B and C.
When a user creates an object (document) of type A, then I need to create an object in B (linked to object A) and an object C (linked to both A and B).
Same for UPDATE operations.
So, if any error occurs, I would like to revert/roll back the whole operation.
Right now, I have a WriteConflict Error whenever two updates/saves are performed on the same collections.
I've always used MySQL and I'm quite new to Mongo, so any tips will be appreciated.
In this thread in the Mongo Community, the developer just says "let it be", but I'm pretty positive there are other ways, and I really don't want WriteConflict errors (partly on principle, partly because of the performance impact).
I'm also not sure whether the write concern concepts could help here or not.
My MongoConfig class looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "it.my.mongodb.repository")
public class MongoConfig extends AbstractMongoClientConfiguration {

    @Value("${spring.data.mongodb.uri}")
    String databaseUrl;

    @Bean
    MongoTransactionManager transactionManager(MongoDatabaseFactory dbFactory) {
        return new MongoTransactionManager(dbFactory);
    }

    @Override
    protected String getDatabaseName() {
        return "myDatabase";
    }

    @Override
    public MongoClient mongoClient() {
        final ConnectionString connectionString = new ConnectionString(databaseUrl);
        final MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
                .applyConnectionString(connectionString)
                .build();
        return MongoClients.create(mongoClientSettings);
    }
}
The methods I want to be transactional are annotated with @Transactional (the Spring Framework annotation), are public, and have no try/catch.
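Roughly, the kind of method I mean looks like this (a simplified sketch; the A/B/C types and their repositories are placeholders for my actual classes):

@Transactional
public void createAll(A a) {
    A savedA = aRepository.save(a);
    B savedB = bRepository.save(new B(savedA.getId()));
    cRepository.save(new C(savedA.getId(), savedB.getId()));
    // if anything above throws, I expect all three inserts to be rolled back
}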
If needed I can provide more information.
Thank you so much in advance!
I have a simple object with two fields which implements the Hazelcast DataSerializable interface. Putting it into the map works fine, but retrieving it from the same map throws an exception: 'Problem while reading DataSerializable, namespace: 0, ID: 0'.
I am using the Hazelcast client '3.12.4' and a Hazelcast cluster based on the latest Docker base image.
Please let me know if any of you have faced a similar issue. I have not used any db as of now for simplicity; my Hazelcast client only saves a simple object in an IMap and then retrieves it from the IMap.
Please find my code snippet below:
Domain Object:
public class Employee implements DataSerializable {

    private String name;
    private Integer serialNumber;

    public Employee() {
    }

    public Employee(String name, Integer serialNumber) {
        this.name = name;
        this.serialNumber = serialNumber;
    }

    // ** Getter and Setter **

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        this.name = in.readUTF();
        this.serialNumber = in.readInt();
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(serialNumber);
    }
}
Hazelcast save and get:
IMap<String, Employee> map = hazelcastInstance.getMap("employee");
map.put(employee.getName(), employee);

EntryObject e = new PredicateBuilder().getEntryObject();
Predicate predicate = e.get("serialNumber").lessThan(200);
Collection<Employee> result = map.values(predicate);
Thanks in advance for your help.
Based on the comment dialogue, the answer here seems to be two-fold:
1. map.get(k) works fine, so the serialization logic itself appears to be correct.
2. In order to query DataSerializable objects, the domain classes need to be on the classpath of the server, because the server must fully deserialise each object of that type to determine whether it matches the query predicate. This requires the Docker container's classpath to be extended using the "-e CLASSPATH" option.
My question is about the best way to inhibit an endpoint that is automatically provided by Olingo.
I am playing with a simple app based on Spring Boot and using Apache Olingo. In short, this is my servlet registration:
@Configuration
public class CxfServletUtil {

    @Bean
    public ServletRegistrationBean getODataServletRegistrationBean() {
        ServletRegistrationBean odataServletRegistrationBean =
                new ServletRegistrationBean(new CXFNonSpringJaxrsServlet(), "/user.svc/*");
        Map<String, String> initParameters = new HashMap<String, String>();
        initParameters.put("javax.ws.rs.Application", "org.apache.olingo.odata2.core.rest.app.ODataApplication");
        initParameters.put("org.apache.olingo.odata2.service.factory", "com.olingotest.core.CustomODataJPAServiceFactory");
        odataServletRegistrationBean.setInitParameters(initParameters);
        return odataServletRegistrationBean;
    } ...
where my ODataJPAServiceFactory is
@Component
public class CustomODataJPAServiceFactory extends ODataJPAServiceFactory implements ApplicationContextAware {

    private static ApplicationContext context;
    private static final String PERSISTENCE_UNIT_NAME = "myPersistenceUnit";
    private static final String ENTITY_MANAGER_FACTORY_ID = "entityManagerFactory";

    @Override
    public ODataJPAContext initializeODataJPAContext()
            throws ODataJPARuntimeException {
        ODataJPAContext oDataJPAContext = this.getODataJPAContext();
        try {
            EntityManagerFactory emf = (EntityManagerFactory) context.getBean(ENTITY_MANAGER_FACTORY_ID);
            oDataJPAContext.setEntityManagerFactory(emf);
            oDataJPAContext.setPersistenceUnitName(PERSISTENCE_UNIT_NAME);
            return oDataJPAContext;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    ...
My entity is quite simple ...
@Entity
public class User {

    @Id
    private String id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;
    ....
Olingo is doing its job perfectly and it helps me with the generation of all the endpoints around CRUD operations for my entity.
My question is: how can I "inhibit" some of them? Let's say, for example, that I don't want to enable deleting my entity.
I could try to use a Filter - but this seems a bit harsh. Are there any other, better ways to solve my problem?
Thanks for the help.
As you have said, you could use a filter, but then you are really coupled with the URI schema used by Olingo. Also, things will become complicated when you have multiple, related entity sets (because you could navigate from one to the other, making the URIs more complex).
There are two things that you can do, depending on what you want to achieve:
If you want to have fine-grained control over which operations are allowed, you can create a wrapper for the ODataSingleProcessor and throw ODataExceptions where you want to disallow an operation. You can either always throw exceptions (i.e. completely disable an operation type) or you can use the URI info parameters to obtain the target entity set and decide whether to throw an exception or call the standard single processor. I have used this approach to create a read-only OData service here (basically, I just created an ODataSingleProcessor which delegates some calls to the standard one, plus overrode a method in the service factory to wrap the standard single processor in my wrapper).
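A minimal sketch of such a wrapper, assuming the Olingo V2 API (only the delete path and one read method are shown; the remaining methods delegate the same way):

import org.apache.olingo.odata2.api.exception.ODataException;
import org.apache.olingo.odata2.api.processor.ODataResponse;
import org.apache.olingo.odata2.api.processor.ODataSingleProcessor;
import org.apache.olingo.odata2.api.uri.info.DeleteUriInfo;
import org.apache.olingo.odata2.api.uri.info.GetEntityUriInfo;

public class DeleteDisallowingProcessor extends ODataSingleProcessor {

    private final ODataSingleProcessor delegate;

    public DeleteDisallowingProcessor(ODataSingleProcessor delegate) {
        this.delegate = delegate;
    }

    @Override
    public ODataResponse deleteEntity(DeleteUriInfo uriInfo, String contentType) throws ODataException {
        // Always reject deletes; alternatively, inspect uriInfo.getTargetEntitySet()
        // here and only throw for specific entity sets.
        throw new ODataException("DELETE is disabled for this service");
    }

    @Override
    public ODataResponse readEntity(GetEntityUriInfo uriInfo, String contentType) throws ODataException {
        return delegate.readEntity(uriInfo, contentType);
    }

    // ... delegate the remaining operations to 'delegate' in the same way
}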
If you want to completely un-expose / ignore a given entity or some properties, then you can use a JPA-EDM mapping model and exclude the desired components. You can find an example of such a mapping here: github. The mapping model is just an XML file which maps the JPA entities / properties to EDM entity types / properties. In order for Olingo to pick it up, you can pass the name of the file to the setJPAEdmMappingModel method of the ODataJPAContext in your initialize method.
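For instance, in the initializeODataJPAContext() method shown in the question (the file name is hypothetical; the XML must be on the classpath):

oDataJPAContext.setJPAEdmMappingModel("jpa-edm-mapping.xml");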
I need to execute integration tests using DbUnit. I have created two datasets (before and after the test) and compare them using the @DatabaseSetup and @ExpectedDatabase annotations. During the test one new database row is created (it is present in the after-test dataset, which I specify using the @ExpectedDatabase annotation). The problem is that the row id is generated automatically (I am using Hibernate), so it changes on every run. Therefore my test passes only once, and after that I would need to change the id in the after-test dataset, which is not what I need. Can you suggest any solutions for this issue, if it can be resolved with DbUnit?
Solution A:
Use an assigned id strategy and a separate query to retrieve the next value in the business logic. That way you can always assign a known id in your persistence tests, with some appropriate database cleanup. Note that this only works if you're using an Oracle sequence.
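A minimal sketch of that idea (MY_SEQ and MyEntity are hypothetical; it assumes an Oracle sequence and an entity mapped with an assigned id, i.e. no @GeneratedValue):

// Fetch the next id explicitly instead of letting Hibernate generate it.
Long nextId = jdbcTemplate.queryForObject("SELECT MY_SEQ.NEXTVAL FROM DUAL", Long.class);

MyEntity entity = new MyEntity();
entity.setId(nextId); // the test now knows the id deterministically
entityManager.persist(entity);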
Solution B:
If I'm not mistaken, there are methods similar to assertEqualsIgnoreCols() in org.dbunit.Assertion, so you can skip the id assertion if you don't mind. Usually I compensate for that with a not-null check on the id. Maybe there are options for this in @ExpectedDatabase, but I'm not sure.
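For illustration, a sketch of that approach with plain DbUnit (the table name, file name, and connection variable are made up; connection is a live IDatabaseConnection):

IDataSet expected = new FlatXmlDataSetBuilder().build(new File("expected_data.xml"));
IDataSet actual = connection.createDataSet();

// Compare the table while ignoring the generated id column.
Assertion.assertEqualsIgnoreCols(expected, actual, "entity_group", new String[] { "id" });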
Solution C:
I'd like to know if there is a better solution myself, because Solution A introduces some performance overhead while Solution B sacrifices a little test coverage.
What version of DbUnit are you using, by the way? I have never seen these annotations in 2.4.9 and below; they look easier to use.
This workaround has been saving my skin till now:
I implemented an AbstractDataSetLoader with a replacement feature:
public class ReplacerDataSetLoader extends AbstractDataSetLoader {

    private Map<String, Object> replacements = new ConcurrentHashMap<>();

    @Override
    protected IDataSet createDataSet(Resource resource) throws Exception {
        FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
        builder.setColumnSensing(true);
        try (InputStream inputStream = resource.getInputStream()) {
            return createReplacementDataSet(builder.build(inputStream));
        }
    }

    /**
     * prepare some replacements
     * @param dataSet
     * @return
     */
    private ReplacementDataSet createReplacementDataSet(FlatXmlDataSet dataSet) {
        ReplacementDataSet replacementDataSet = new ReplacementDataSet(dataSet);
        // Configure the replacement dataset to replace '[null]' strings with null.
        replacementDataSet.addReplacementObject("[null]", null);
        replacementDataSet.addReplacementObject("[NULL]", null);
        replacementDataSet.addReplacementObject("[TODAY]", new Date());
        replacementDataSet.addReplacementObject("[NOW]", new Timestamp(System.currentTimeMillis()));
        for (java.util.Map.Entry<String, Object> entry : replacements.entrySet()) {
            replacementDataSet.addReplacementObject("[" + entry.getKey() + "]", entry.getValue());
        }
        replacements.clear();
        return replacementDataSet;
    }

    public void replace(String replacement, Object value) {
        replacements.put(replacement, value);
    }
}
With this you can track the ids you need and replace them in your tests:
@DatabaseSetup(value = "/test_data_user.xml")
@DbUnitConfiguration(dataSetLoaderBean = "replacerDataSetLoader")
public class ControllerITest extends WebAppConfigurationAware {

    // reference my test db connection so I can get the last id using a regular query
    @Autowired
    DatabaseDataSourceConnection dbUnitDatabaseConnection;

    // reference my dataset loader so I can interact with it
    @Autowired
    ReplacerDataSetLoader datasetLoader;

    private static Number lastid = Integer.valueOf(15156);

    @Before
    public void setup() {
        System.out.println("setting " + lastid);
        datasetLoader.replace("emp1", lastid.intValue() + 1);
        datasetLoader.replace("emp2", lastid.intValue() + 2);
    }

    @After
    public void tearDown() throws SQLException, DataSetException {
        ITable table = dbUnitDatabaseConnection.createQueryTable("ids", "select max(id) as id from company.entity_group");
        lastid = (Number) table.getValue(0, "id");
    }

    @Test
    @ExpectedDatabase(value = "/expected_data.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void test1() throws Exception {
        // run your test logic
    }

    @Test
    @ExpectedDatabase(value = "/expected_data.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void test2() throws Exception {
        // run your test logic
    }
}
And my expected dataset needs the emp1 and emp2 replacements:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <company.entity_group ID="15155" corporate_name="comp1"/>
    <company.entity_group ID="15156" corporate_name="comp2"/>
    <company.entity_group ID="[emp1]" corporate_name="comp3"/>
    <company.entity_group ID="[emp2]" corporate_name="comp3"/>
    <company.ref_entity ID="1" entity_group_id="[emp1]"/>
    <company.ref_entity ID="2" entity_group_id="[emp2]"/>
</dataset>
Use DatabaseAssertionMode.NON_STRICT and delete the 'id' column from your expected dataset XML.
DbUnit will then ignore this column.