Use of JdbcTemplate in a Spring Boot application for AWS DynamoDB - spring-boot

I am building Spring Boot microservices that use AWS DynamoDB for database operations, interacting with it through the libraries below. Operations work fine with CrudRepository, but it is very slow for bulk operations. Do we have any option like jdbcTemplate.batchUpdate() for DynamoDB?
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-core</artifactId>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
</dependency>
<dependency>
    <groupId>com.github.derjust</groupId>
    <artifactId>spring-data-dynamodb</artifactId>
    <version>4.5.0</version>
</dependency>

The batchSave available in spring-data-dynamodb uses the AWS SDK's DynamoDBMapper.batchSave, which doesn't update existing items: DynamoDBMapper.batchSave internally calls AmazonDynamoDB.batchWriteItem(), which doesn't support updating an existing item. From the DynamoDBMapper.batchSave documentation:

Saves the objects given using one or more calls to the AmazonDynamoDB.batchWriteItem(BatchWriteItemRequest) API. No version checks are performed, as required by the API. This method ignores any SaveBehavior set on the mapper, and always behaves as if SaveBehavior.CLOBBER was specified, as the AmazonDynamoDB.batchWriteItem() request does not support updating existing items.
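If you drop down to AmazonDynamoDB.batchWriteItem() yourself for bulk throughput, note that the API accepts at most 25 write requests per call (DynamoDBMapper.batchSave does this chunking for you internally). A minimal, library-free sketch of the required chunking - the BatchChunker class name is my own, not part of the SDK:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // DynamoDB's BatchWriteItem API accepts at most 25 put/delete requests per call.
    static final int MAX_BATCH_SIZE = 25;

    // Split a list of items into chunks no larger than MAX_BATCH_SIZE,
    // each of which can then be sent as one batchWriteItem request.
    public static <T> List<List<T>> chunk(List<T> items) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += MAX_BATCH_SIZE) {
            int end = Math.min(i + MAX_BATCH_SIZE, items.size());
            chunks.add(new ArrayList<>(items.subList(i, end)));
        }
        return chunks;
    }
}
```

Remember that batchWriteItem can also return unprocessed items under throttling, which you must retry yourself.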

Related

Setting up Flapdoodle so that it stores documents on the disk (+ caching)

I have a Spring Boot project in which I use Flapdoodle. Flapdoodle is normally used for testing, but I use it for development as well, since it starts with the application itself and is convenient to use. I just added it as a dependency, but without the "test" scope.
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>3.4.11</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
I've also set spring.mongodb.embedded.version=4.4.2, as the application apparently doesn't start if this isn't set.
Everything works fine. I can create collections and read and write documents using MongoTemplate.
However, I'd be interested to know how I can switch the storage behavior of the database: not in-memory anymore, but on disk (plus caching the most recent documents, if possible). That way the data isn't lost between restarts.
How can I set up the YAML configuration to enable this behavior?

Spring Boot 2: /actuator/health endpoint is taking a long time

In one of my services, the /actuator/health endpoint is taking a long time (approximately 9 seconds). I am using the following dependencies; how do I debug this?
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Spring Boot version used: 2.0.3.RELEASE
Thanks,
Hari
Basically, the health endpoint is implemented so that it holds a list of all Spring beans that implement the HealthIndicator interface.
Each health indicator is responsible for supplying health information about one subsystem (examples of such subsystems are: disk, Postgres, Mongo, etc.); Spring Boot comes with some predefined HealthIndicators.
When the health endpoint is invoked, it iterates through this list, gets the information about each subsystem, and then constructs the answer.
Hence you can place a breakpoint in the relevant health indicators (assuming you know which subsystems are checked) and see what happens.
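To make those mechanics concrete, here is a simplified, library-free model of that loop (the class and method names are illustrative, not Spring Boot's actual implementation): indicators run sequentially, so a single slow check delays the entire response.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class HealthEndpointModel {
    // Simplified model of /actuator/health: run every registered indicator
    // in sequence and aggregate the per-subsystem statuses into one answer.
    public static Map<String, String> aggregate(Map<String, Supplier<String>> indicators) {
        Map<String, String> details = new LinkedHashMap<>();
        String overall = "UP";
        for (Map.Entry<String, Supplier<String>> e : indicators.entrySet()) {
            // A slow supplier here stalls everything that comes after it.
            String status = e.getValue().get();
            details.put(e.getKey(), status);
            if (!"UP".equals(status)) {
                overall = "DOWN"; // one failing subsystem takes the whole endpoint down
            }
        }
        details.put("status", overall);
        return details;
    }
}
```

In the real implementation the per-indicator work is the interesting part to profile; timing each one (or breakpointing as suggested above) pinpoints the 9-second offender.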
If you're looking for the HTTP entry point - the code that gets called when you call http://<host-port>/health (the exact path can vary depending on your settings, but you get the idea) - it can be found here.
Yet another approach that comes to mind is disabling "suspicious" health checks and finding the slow one by elimination.
For example, if you have Elasticsearch and would like to disable its check, set this in application.properties:
management.health.elasticsearch.enabled=false
On top of Mark's answer: setting this property in your application.properties / application.yml
management.endpoint.health.show-details=always
will help in identifying which components comprise the health check, as GET /actuator/health will yield more details.
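With details enabled, the response enumerates each indicator, which makes the slow or failing one much easier to spot. An illustrative (not literal) Spring Boot 2.x response body:

```json
{
  "status": "UP",
  "details": {
    "diskSpace": {
      "status": "UP",
      "details": { "total": 499963174912, "free": 91283861504 }
    },
    "elasticsearch": { "status": "UP" }
  }
}
```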
On top of Mark's great answer:
As an alternative to synchronously running all your health checks when the /health endpoint is called (which can only lead to longer and longer execution times as you add more health checks), you can make your HealthIndicators run asynchronously using the library spring-boot-async-health-indicator (for Spring Boot >= 2.2) by annotating them with @AsyncHealth.
On top of making the /health endpoint return immediately (as it returns health statuses calculated on separate threads), it also records the execution time of each asynchronous health() method as an additional detail, which dramatically helps in figuring out which underlying service responds slowly on production systems.
Disclaimer: I wrote this library to help solve multiple limitations of the existing HealthIndicator system, including this one.

Quarkus GraphQL: How to change the default endpoint?

I am using the dependency shown below in a Quarkus application. By default the endpoint is /graphql, but since I am running this application in a k8s environment behind an ingress, this is not ideal. Does anyone have an idea how to change this default endpoint to something like /<service-name>/graphql?
<dependency>
    <groupId>io.smallrye</groupId>
    <artifactId>smallrye-graphql-servlet</artifactId>
    <version>1.0.1</version>
</dependency>
If you're using the SmallRye GraphQL extension, you can control the endpoint path using application.properties:
quarkus.smallrye-graphql.root-path=/my-path-to-graphql
You can also use variables (with ${variableName} syntax) in the value, so you can inject your service name there.
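For example, in application.properties (the service-name property here is hypothetical; substitute whatever you define or inject yourself):

```properties
# Move the GraphQL endpoint under a service prefix
service-name=my-service
quarkus.smallrye-graphql.root-path=/${service-name}/graphql
```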
But to use that extension, you need to adjust the dependency to:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-graphql</artifactId>
</dependency>
Note that it's only available since Quarkus 1.5.0.

How to suppress windowed aggregation results in Spring Cloud Kafka Streams?

I am using the Kafka Streams binder in my Spring Cloud project. The Kafka Streams application uses a sliding window of 6 minutes to aggregate results and analyze patterns. The problem is that the aggregation operation generates duplicate (intermediate) results.
I want to suppress the intermediate results and publish only after the window ends. This can be achieved with Kafka's .suppress() operation, available since Kafka 2.1.1, but this Spring Cloud version does not ship with a Kafka version recent enough to use that capability.
Dependencies used by the project:
<spring-boot.version>2.1.9.RELEASE</spring-boot.version>
<spring-cloud.version>Greenwich.SR3</spring-cloud.version>
Any alternatives for suppressing the intermediate results would be helpful.
There is no equivalent functionality in prior versions of Kafka Streams that gives you the same behavior as the recently introduced suppress() feature.
The closest you can get is to configure your Kafka Streams application's record caches (settings like cache.max.bytes.buffering) and commit.interval.ms to reduce the number of "intermediate" updates you see. But unlike the new suppress() feature, this will not fully remove such updates.
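A sketch of those two settings as plain Streams configuration properties (the values are illustrative starting points to tune, not recommendations):

```java
import java.util.Properties;

public class StreamsTuning {
    // Reduce (but not eliminate) intermediate windowed updates: a larger record
    // cache coalesces more updates per key, and a longer commit interval flushes
    // the cache downstream less often.
    public static Properties reducedUpdateProps() {
        Properties props = new Properties();
        // 10 MiB record cache (default is smaller); merged updates per key.
        props.put("cache.max.bytes.buffering", String.valueOf(10 * 1024 * 1024));
        // Flush/commit every 30 s instead of the default 30 s / 100 ms (EOS).
        props.put("commit.interval.ms", "30000");
        return props;
    }
}
```

These keys can also be supplied through the Spring Cloud Stream binder's Kafka Streams configuration section rather than a raw Properties object.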
You can override the kafka-clients and kafka-streams versions, as described in the appendix of the Spring for Apache Kafka reference manual.
If you are not using the embedded Kafka broker in tests, you just need to override the clients and streams artifacts:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>2.1.1</version>
</dependency>

What could be injecting an extra xmlns into a SOAP response?

I inherited a web service that used to work fine until we had to upgrade the runtime environment (from JBoss/JRE 6 to Tomcat 7/JRE 7). There was no code change except for the pom.xml!
In fact it still works fine, except that the many existing clients can no longer handle the response, because of an extra namespace attribute now present in one of the elements of the response.
That is, previously (before the migration) that element (in the SOAP response) used to be:
<OurResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:noNamespaceSchemaLocation="OurResponse.xsd"
             ourresponseVersion="M1m2v03" xmlns="">
And now it is:
<v01:OurResponse acknowledgementVersion="M1m2v03"
xmlns:v01="http://webservice.ourdomain.com/projone/modtwo/M1m2v03">
Since there was no code change involved, I am baffled by this (minor but critical) change in the SOAP response.
In particular, I am trying to understand:
Which part of the build system changes this namespace attribute?
How do I restore it back to previous behavior?
Why would the clients break on such a minor change? (i.e. the content of the response is identical!)
The only relevant changes I have been able to spot in the pom.xml are:
Adding the following dependencies:
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
    <version>1.46</version>
</dependency>
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>2.7.4</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.0.7.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Updating the cxf-rt-frontend-jaxws dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-transports-http dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-ws-security dependency from version 2.2.7 to 2.7.7.
Adding the following dependencies:
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-core</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-databinding-aegis</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-management</artifactId>
    <version>2.7.7</version>
</dependency>
Again, I am assuming there is some internal change in one of the frameworks involved (CXF? Spring?) that handles this internally. If this assumption is correct, the three questions above still stand.
Update 1:
The culprit turned out to be the org.apache.cxf package version change from 2.2.7 to 2.7.7.
Looks like newer is not always better... unless there is a way to programmatically force the legacy behavior of stripping out the namespace prefixes?
Update 2: Using CXF 2.2.7 on Tomcat 7/JRE 7 had the side effect of killing the Tomcat server after sending a single SOAP message (it seems to be related to SSL).
The fact that a venerable server like Tomcat can die because of a single rogue .war package is pretty disturbing, but since I cannot fix Tomcat and have not found a programmatic way to work around the implicit namespace prefix issue, I tried various stable CXF releases, looking for one that exhibits the legacy behavior without killing Tomcat.
I tried versions 2.7.1 and 2.6.10, but eventually only 2.5.9 worked.
I hope this helps someone who stumbles on a similar problem.
Conforming XML implementations are not permitted to die due to changes in the use of xmlns attributes. Expressing the same data model with a prefix or without is the same thing. If your client failed, you need to fix the client. If you have clients that are hypersensitive to the use of namespace prefixes instead of to the real data model, CXF is not necessarily a good choice.
Most likely CXF upgraded to a more recent version of JAXB, and it changed its mind about the namespace prefixes.
To elaborate: Apache CXF was designed to focus on standards-conforming web services. Apache Axis has traditionally filled the space for not-so-standards-conforming web services just fine. So no, the CXF development community has never worried about 'prefix stability'. If the XML is formally correct, CXF's tests are happy.
For this, and many other reasons, CXF delegates XML generation for JAXB web services to the official JAXB reference implementation. New versions of CXF pick up new versions of JAXB, and JAXB, from time to time, makes changes that have the effect of rearranging the namespace prefixes.
XML generation in CXF is pluggable, so if you want to use an older JAXB, or roll your own, you can. You can provide a 'Provider' and do the whole job yourself if you like.
There is an option in CXF to pass an object into JAXB that decides what prefix to use for what namespace, but I don't think it can be used to force a particular namespace to be defaulted. You might be able to get what you want with a Provider and a carefully configured call to the JAXB API.
The CXF user mailing list archive has hundreds of messages from people who are swimming upstream with namespace prefixes.
(As for Tomcat dying, well, that's another question.)