I am using the Kafka Streams binder in my Spring Cloud project. The Kafka Streams application uses a sliding window of 6 minutes to aggregate the results and analyze patterns. The problem is that the aggregation operation is generating duplicate results.
I want to suppress the intermediate results and publish only after the window ends. This can be achieved with the Kafka Streams .suppress() operation, available in Kafka 2.1.1. But the Spring Cloud version I am on does not pull in a Kafka version recent enough to use that capability.
Dependencies used by the project:
<spring-boot.version>2.1.9.RELEASE</spring-boot.version>
<spring-cloud.version>Greenwich.SR3</spring-cloud.version>
Any alternatives for suppressing the intermediate results would be helpful.
There is no equivalent functionality in prior versions of Kafka Streams that gives you the same behavior as the recently introduced Suppress feature.
The closest you can get is to configure your Kafka Streams application's record caches (settings like cache.max.bytes.buffering) and commit.interval.ms to reduce the number of "intermediate" updates you will see. Unlike the new Suppress feature, however, this will not fully remove such updates.
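For example, with the Kafka Streams binder both settings can be passed through the binder configuration. A minimal sketch in application.properties, with illustrative values (not recommendations):
# larger record cache -> fewer intermediate updates flushed downstream (value is illustrative)
spring.cloud.stream.kafka.streams.binder.configuration.cache.max.bytes.buffering=10485760
# longer commit interval -> less frequent emission of intermediate results (value is illustrative)
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=30000
The trade-off is higher end-to-end latency before results appear downstream.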
You can override the kafka-clients and kafka-streams versions, as described in the appendix of the Spring for Apache Kafka reference manual.
If you are not using the embedded Kafka broker in tests, you just need to override the kafka-clients and kafka-streams dependencies:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>2.1.1</version>
</dependency>
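With those overrides in place, the windowed aggregation can hold back intermediate results. A minimal sketch, assuming kafka-streams 2.1.x is on the classpath; the topic names, String keys, and count aggregation are illustrative assumptions, not taken from the question:
import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("events")
        .groupByKey()
        // 6-minute windows; the grace period bounds how long late records are accepted
        .windowedBy(TimeWindows.of(Duration.ofMinutes(6)).grace(Duration.ofMinutes(1)))
        .count()
        // emit only the final result per window, once the window closes
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        // unwrap the windowed key before writing downstream
        .toStream((windowedKey, count) -> windowedKey.key())
        .to("aggregated-output", Produced.with(Serdes.String(), Serdes.Long()));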
I'm trying to connect Hibernate Reactive with Spring WebFlux (mostly Project Reactor).
The problem is that Uni<> (Hibernate Reactive's type) takes the place of Mono<> (Project Reactor's type), and the behaviour is no longer as obvious as when Project Reactor is used without other reactive types.
Are there any tools for compatibility between Uni<> and Reactor's Mono<>/Flux<>?
I have already investigated GitHub repos and tried to connect the reactive types via custom Spring starters.
Yes, there is support to convert between the two type systems.
Add the following dependency...
<dependency>
    <groupId>io.smallrye.reactive</groupId>
    <artifactId>mutiny-reactor</artifactId>
    <version>1.7.0</version>
</dependency>
...and use the following code:
Mono<T> monoFromUni = uni.convert().with(UniReactorConverters.toMono());
You can find detailed documentation here: https://smallrye.io/smallrye-mutiny/1.7.0/guides/converters/
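For instance, a minimal round-trip sketch, assuming the dependency above (the class name and item value are illustrative):
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.converters.uni.UniReactorConverters;
import reactor.core.publisher.Mono;

public class UniMonoBridge {
    public static void main(String[] args) {
        Uni<String> uni = Uni.createFrom().item("hello");

        // Uni -> Mono
        Mono<String> mono = uni.convert().with(UniReactorConverters.toMono());

        // Mono -> Uni, for the opposite direction
        Uni<String> back = Uni.createFrom().converter(UniReactorConverters.fromMono(), mono);

        System.out.println(back.await().indefinitely()); // prints "hello"
    }
}
The same artifact provides matching converters between Multi<> and Flux<>.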
I have a Spring Boot project for which I use Flapdoodle. Flapdoodle is normally used for testing, but I use it for development, as it starts together with the application and is comfortable to use. I just added it as a dependency, without the "test" scope.
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>3.4.11</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
I've also set spring.mongodb.embedded.version=4.4.2, as the application apparently doesn't start if this isn't set.
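In YAML form, that setting looks like this (assuming the standard Spring Boot property):
spring:
  mongodb:
    embedded:
      version: 4.4.2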
Everything works fine. I can create collections and read and write documents using MongoTemplate.
However, I'd be interested to know how I can switch the storage behavior of the database: not in-memory anymore, but on disk (plus caching the most recent documents, if possible). That way the data isn't lost between restarts.
How can I set up the YAML configuration to enable this behavior?
What is the best way to pass data from MySQL to a Kafka producer?
I know there is a good Kafka connector for this.
However, I want to use the Spring Boot framework, because I need to transform the data and merge it with the result of an HTTP call before producing the data to Kafka.
So, is there any good example, GitHub repo, blog, whatever?
The easiest way is to use Camel and the camel-debezium component. An example from the docs follows. You may use all Camel features to do any kind of transformation or enrichment before pushing to a Kafka topic. I think that can be a legitimate choice if the source data in MySQL does not represent your source of truth and you need to filter/transform before pushing to Kafka.
Listen for events:
from("debezium-mysql:dbz-test-1?offsetStorageFileName=/usr/offset-file-1.dat&databaseHostName=localhost&databaseUser=debezium&databasePassword=dbz&databaseServerName=my-app-connector&databaseHistoryFileName=/usr/history-file-1.dat")
.log("Event received from Debezium : ${body}")
.log(" with this identifier ${headers.CamelDebeziumIdentifier}")
.log(" with these source metadata ${headers.CamelDebeziumSourceMetadata}")
.log(" the event occured upon this operation '${headers.CamelDebeziumSourceOperation}'")
.log(" on this database '${headers.CamelDebeziumSourceMetadata[db]}' and this table '${headers.CamelDebeziumSourceMetadata[table]}'")
.log(" with the key ${headers.CamelDebeziumKey}")
.log(" the previous value is ${headers.CamelDebeziumBefore}")
Produce to Kafka after you read/ transform:
.to("kafka:{{topic}}")
Also, check out the Spring Boot guide:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-debezium-mysql-starter</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>
A detailed GitHub example that includes Camel transformations is here.
Maybe a naive question, but can anyone provide me with the sbt dependency for KSQL?
I checked on Maven, but couldn't find any.
Is the dependency hosted some place other than Maven, and if so, what is the resolver I will have to add to my build.sbt file?
I'm trying to write a Scala app which uses KSQL to query some Kafka topics, to create a dashboard with some metrics.
None of the Confluent dependencies are in Maven Central. See https://docs.confluent.io/current/installation/clients.html#maven-repository-for-jars
And I think this is the KSQL client target
<dependency>
    <groupId>io.confluent.ksql</groupId>
    <artifactId>ksql-engine</artifactId>
</dependency>
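Translated to sbt, that would look something like the following; the repository URL is Confluent's public Maven repository, and the version shown is an illustrative assumption:
// build.sbt
resolvers += "confluent" at "https://packages.confluent.io/maven/"

// the version here is an assumption; align it with your Confluent Platform release
libraryDependencies += "io.confluent.ksql" % "ksql-engine" % "5.4.1"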
Example Java code - https://github.com/confluentinc/ksql/tree/master/ksqldb-examples/src/main/java/io/confluent/ksql/embedded
You don't need to embed KSQL in your code, though. It's meant to run independently on the KSQL server, to which you can submit queries from code or via the KSQL CLI. In your application, you'd use a regular consumer or the Kafka Streams API directly.
I would suggest trying the new Scala Kafka Streams wrapper, too.
I inherited a web service that used to work fine until we had to upgrade the runtime environment (from JBOSS/JRE6 to Tomcat7/JRE7). There was no code change except for the pom.xml!
In fact, it still works fine, except that the many existing clients can no longer handle the response, because of an extra namespace attribute now present in one of its elements.
That is, previously (before the migration) that element (in the SOAP response) used to be:
<OurResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:noNamespaceSchemaLocation="OurResponse.xsd"
             ourresponseVersion="M1m2v03" xmlns="">
And now it is:
<v01:OurResponse ourresponseVersion="M1m2v03"
                 xmlns:v01="http://webservice.ourdomain.com/projone/modtwo/M1m2v03">
Since there was no code change involved, I am baffled by this (minor but critical) change in the SOAP response.
In particular, I am trying to understand:
1. Which part of the build system changes this namespace attribute?
2. How do I restore the previous behavior?
3. Why would the clients break on such a minor change? (The content of the response is identical!)
The only relevant changes I have been able to spot in the pom.xml are:
Adding the following dependencies:
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
    <version>1.46</version>
</dependency>
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>2.7.4</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.0.7.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Updating the cxf-rt-frontend-jaxws dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-transports-http dependency from version 2.2.7 to 2.7.7.
Updating the cxf-rt-ws-security dependency from version 2.2.7 to 2.7.7.
Adding the following dependencies:
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-core</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-databinding-aegis</artifactId>
    <version>2.7.7</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-management</artifactId>
    <version>2.7.7</version>
</dependency>
Again, I am assuming there is some internal change in one of the frameworks involved (CXF? Spring?). If this assumption is correct, then:
1. Which part of the build system changes this namespace attribute?
2. How do I restore the previous behavior?
3. Why would the clients break on such a minor change? (The content of the response is identical!)
Update 1:
The culprit turned out to be the change of the org.apache.cxf packages from version 2.2.7 to 2.7.7.
Looks like newer is not always better... unless there is a way to programmatically force the legacy behavior of stripping out the namespace prefixes?
Update 2: Using CXF 2.2.7 on Tomcat7/JRE7 had the side-effect of killing the Tomcat server after sending a single SOAP message (seems to be related to SSL).
The fact that a venerable server like Tomcat can die because of a single rogue .war package is pretty disturbing. But since I cannot fix Tomcat, and I have not found a programmatic way to work around the implicit namespace prefix issue, I tried various stable CXF releases, looking for one that would exhibit the legacy behavior without killing Tomcat.
I tried versions 2.7.1 and 2.6.10 but eventually only 2.5.9 worked.
I hope this helps someone who stumbles on a similar problem.
Conforming XML implementations are not permitted to die due to changes in the use of xmlns attributes. Expressing the same data model with a prefix or without is the same thing. If your client failed, you need to fix the client. If you have clients that are hypersensitive to the use of namespace prefixes instead of to the real data model, CXF is not necessarily a good choice.
Most likely, CXF upgraded to a more recent version of JAXB, and it changed its mind about the namespace prefixes.
To elaborate this: Apache CXF was designed to focus on standard-conforming web services. Apache Axis has traditionally filled the space for not-so-standard-conforming web services, just fine. So no, the CXF development community has never worried about 'prefix stability'. If the XML is formally correct, CXF tests are happy.
For this, and many other reasons, CXF delegates XML generation for JAXB web services to the official JAXB reference implementation. New versions of CXF pick up new versions of JAXB. JAXB, from time to time, makes changes that have the effect of rearranging the namespace prefixes.
The XML generation in CXF is pluggable, so if you want to use an older JAXB, or roll your own, you can. You can provide a 'Provider' and do the whole job yourself if you like.
There is an option in CXF to pass an object into JAXB that decides what prefix to use for what namespace, but I don't think it can be used to force a particular namespace to be defaulted. You might be able to get what you want with a Provider and a carefully configured call to the JAXB API.
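For the prefix-mapping option, here is a minimal sketch, assuming a programmatic JaxWsServerFactoryBean setup; the service class and address are hypothetical placeholders, and, as noted above, this chooses the prefix rather than forcing the namespace to be defaulted:
import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.jaxb.JAXBDataBinding;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

public class PrefixMappingServer {
    public static void main(String[] args) {
        // map the namespace URI to a fixed prefix
        Map<String, String> nsMap = new HashMap<String, String>();
        nsMap.put("http://webservice.ourdomain.com/projone/modtwo/M1m2v03", "v01");

        JAXBDataBinding dataBinding = new JAXBDataBinding();
        dataBinding.setNamespaceMap(nsMap);

        JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
        factory.setServiceClass(OurResponseService.class); // hypothetical service class
        factory.setAddress("http://localhost:8080/ourService");
        factory.setDataBinding(dataBinding);
        factory.create();
    }
}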
The CXF User mailing list archive has hundreds of messages to and from people who are swimming upstream with namespace prefixes.
(As for tomcat dying, well, that's another question.)