What is the difference between reactor.ipc.netty.tcp.TcpServer and reactor.netty.tcp.TcpServer? - reactor-netty

I was trying to implement a TCP server using Reactor Netty.
I see two TcpServer classes: reactor.ipc.netty.tcp.TcpServer and reactor.netty.tcp.TcpServer.
What is the difference between the two?
I don't see any reference to reactor.ipc.netty.tcp.TcpServer in the documentation.

reactor.ipc.netty.tcp.TcpServer is the API for creating a TCP server in Reactor Netty versions <= 0.7.x.
In versions >= 0.8.x the API changed, so the class is now reactor.netty.tcp.TcpServer.
Reactor Netty versions <= 0.7.x have reached EOL, so consider using versions >= 0.8.x.
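For illustration, a minimal echo server with the newer reactor.netty.tcp.TcpServer API could look like the sketch below (host and port are placeholders, and this assumes a 0.9.x/1.x version of Reactor Netty):

import reactor.netty.DisposableServer;
import reactor.netty.tcp.TcpServer;

public class EchoServer {

    public static void main(String[] args) {
        // Bind a TCP server that echoes every inbound payload back to the client.
        DisposableServer server = TcpServer.create()
                .host("localhost")   // placeholder host
                .port(7000)          // placeholder port
                .handle((inbound, outbound) -> outbound.send(inbound.receive().retain()))
                .bindNow();

        // Block until the server is disposed.
        server.onDispose().block();
    }
}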

Related

can I use old spring-data-elasticsearch to connect to new elasticsearch?

Currently we are on spring-data-elasticsearch 3.2.6 + elasticsearch 6.8.
We are moving to the new Elasticsearch 7.x. Do I have to update spring-data-elasticsearch to 4.x? We only use ElasticsearchRepository from spring-data-elasticsearch, and we don't need any new features of Elasticsearch 7.x.
If we move to Elasticsearch 8.x in the future, do I need to update spring-data-elasticsearch?
Update:
What Elasticsearch client does Spring-Data-Elasticsearch use under the hood?
All methods in the `ElasticsearchRepository` are deprecated. What should I use?
I found some discussions in the above threads. Here is my summary.
Operations with Templates:
ElasticsearchTemplate implements ElasticsearchOperations. It uses the TransportClient (which is deprecated in ES 7 and has been removed in ES 8).
ElasticsearchRestTemplate implements ElasticsearchOperations. It uses the High Level REST Client (which is deprecated since ES 7.16.0 and will be removed in the future: @Deprecated(since = "7.16.0", forRemoval = true)).
ReactiveElasticsearchTemplate implements ReactiveElasticsearchOperations. It uses the Reactive Client.
Repository
ElasticsearchRepository uses the TransportClient by default. All methods in ElasticsearchRepository are deprecated now.
A reactive Elasticsearch repository builds on ReactiveElasticsearchOperations.
Since the underlying TransportClient and HighLevelRestClient have been deprecated, can I conclude that the correct way is to use the Reactive Client (ReactiveElasticsearchTemplate or a reactive Elasticsearch repository, as sketched below)?
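For illustration, a reactive repository in Spring Data Elasticsearch 4.x could look roughly like this sketch (the Product document and ProductRepository names are hypothetical):

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ReactiveElasticsearchRepository;
import reactor.core.publisher.Flux;

@Document(indexName = "products")   // hypothetical index name
class Product {
    @Id
    private String id;
    private String name;
    // getters and setters omitted for brevity
}

// Spring Data derives the query implementation; no deprecated client code appears here.
interface ProductRepository extends ReactiveElasticsearchRepository<Product, String> {
    Flux<Product> findByName(String name);
}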
The new Elasticsearch would be 8.
Val already put the link to the compatibility matrix in his comment.
Version 3.2.6 is pretty outdated (released March 25, 2020) and has been out of support since October 2020.
The first thing you can try is to see if your application works with a 7.x cluster, although I doubt it; I can't tell you exactly what changed in the API, but there was quite a bit.
What you should not do is put newer Elasticsearch libraries on the classpath than the ones Spring Data Elasticsearch was built with; in most cases this will cause problems.
But I'd recommend upgrading your application anyway and keeping it regularly up to date.
As for a future upgrade to version 8: it is possible to send a compatibility header in your requests (this can be done in Spring Data Elasticsearch 4), and the Elasticsearch cluster should then respond in a format that is compatible with a client expecting version 7. I wrote should because it does not conform to this in every case; I reported one case that has since been fixed. But I wouldn't rely on that.
Again, please update your application and keep it up to date, not only because of Spring Data Elasticsearch, but also because these updates always include bug and/or security fixes.

Using Java functional API with Spring Cloud Data Flow and Polled Consumers

I am working on a project that is trying to use the polled consumer API. However, existing documentation, blog posts and sample code seem to use deprecated annotations (such as org.springframework.cloud.stream.annotation.Input). This seems to be because they rely on the older style of Spring Cloud Stream applications rather than the Java functional API (e.g., java.util.function.Function), as shown in other examples such as this one, given in the same repo.
Is there a way to use the functional style with polled consumers in Spring Cloud Stream?
You are using outdated documentation. The most current is available from the project site - https://spring.io/projects/spring-cloud-stream#learn.
The section you are looking for is - https://docs.spring.io/spring-cloud-stream/docs/3.1.5/reference/html/spring-cloud-stream.html#spring-cloud-streams-overview-using-polled-consumers
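For what it's worth, with the functional/binding model a pollable source is declared in configuration and then polled explicitly. A minimal sketch, assuming Spring Cloud Stream 3.x and a placeholder destination name, could look like this:

// application.properties (placeholder binding name):
//   spring.cloud.stream.pollable-source=myDestination

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@SpringBootApplication
@EnableScheduling
public class PolledConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(PolledConsumerApplication.class, args);
    }
}

@Component
class Poller {

    private final PollableMessageSource source;

    Poller(PollableMessageSource source) {
        this.source = source;
    }

    // Poll the bound destination on a fixed schedule instead of being message-driven.
    @Scheduled(fixedDelay = 5000)
    void poll() {
        source.poll(message -> System.out.println("Received: " + message.getPayload()));
    }
}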

getting kafka error : MetadataRequest versions older than 4 don't support the allowAutoTopicCreation field

I developed a spring-boot application using boot version 2.1.3.RELEASE, and added the kafka-clients, spring-kafka, kafka_2.12 and kafka-streams dependencies without specific versions. The application is intended to open a stream from a Kafka topic and do a count aggregation, grouping on keys over a timed window. While in debug mode, the following error is logged.
org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest versions older than 4 don't support the allowAutoTopicCreation field
2019-10-18 09:18:05.050 DEBUG 6435 --- [0c5acc95c-admin] o.a.k.clients.admin.KafkaAdminClient : [AdminClient clientId=CAG__CNTS_service_ads_2-d1d85a17-42e5-4d98-9ef8-ed90c5acc95c-admin] Call(callName=topicsMetadata, deadlineMs=1571370604855) failed with non-retriable exception after 1 attempt(s)
java.lang.Exception: UnsupportedVersionException: MetadataRequest versions older than 4 don't support the allowAutoTopicCreation field
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:612) ~[kafka-clients-2.0.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:984) [kafka-clients-2.0.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1124) [kafka-clients-2.0.1.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
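(For context, the windowed count described in the question is roughly the following Kafka Streams topology. This is only a sketch; it assumes a recent kafka-streams version, and the topic name and window size are placeholders.)

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedCountTopology {

    public KTable<Windowed<String>, Long> build(StreamsBuilder builder) {
        // Read from a placeholder topic, group records by key,
        // and count them per one-minute tumbling window.
        KStream<String, String> input = builder.stream("input-topic");
        return input
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
                .count();
    }
}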
Boot 2.1.x (current is 2.1.9) uses spring-kafka 2.2.x, which uses the 2.0.1 kafka-clients by default. See the project page for the compatibility matrix. While you can generally use newer clients with older brokers (since 0.10), of course you can only use features that the broker supports.
0.10.x.x is simply too old for newer spring-kafka versions.

Are we vulnerable to Pivotal Spring Framework Vulnerability CVE-2018-1270?

We are using the below list of Spring Framework jar files in the application. We are not using Spring's WebSocket support or any other form of WebSocket dependency, and there are no code references enabling STOMP support.
Could someone please confirm whether we are still vulnerable to CVE-2018-1270?
spring-security-web-4.2.3.RELEASE.jar
spring-security-core-4.2.3.RELEASE.jar
spring-security-config-4.2.3.RELEASE.jar
spring-aspects-4.3.12.RELEASE.jar
spring-web-4.3.12.RELEASE.jar
spring-tx-4.3.12.RELEASE.jar
spring-orm-4.3.12.RELEASE.jar
spring-jdbc-4.3.12.RELEASE.jar
spring-expression-4.3.12.RELEASE.jar
spring-core-4.3.12.RELEASE.jar
spring-context-support-4.3.12.RELEASE.jar
spring-context-4.3.12.RELEASE.jar
spring-beans-4.3.12.RELEASE.jar
spring-jms-4.3.12.RELEASE.jar
spring-messaging-4.3.12.RELEASE.jar
spring-aop-4.3.12.RELEASE.jar
All versions of the 4.3 branch up to 4.3.15 are affected; you should upgrade to 4.3.16 as recommended. Further information is available here: https://pivotal.io/security/cve-2018-1270
This only affects you if you use STOMP over WebSocket; otherwise you are not affected.

Spring XD on YARN: ver 1.2.1 direct binding support for kafka source

Spring XD on YARN: ver 1.2.1 direct binding support for kafka source.
1. I know this is not supported yet (as of ver 1.3.0); is there a definite date/version? It would help our project schedule.
2. This direct binding support for the Kafka source is very critical for our project. We are in a situation where we would have to totally abandon Spring XD on YARN just because of this.
Trying to do
stream create --name directkafkatohdfs --definition "kafka | hdfs"
stream deploy directkafkatohdfs --properties "module.*.count=0"
I am hitting the exception "must be a positive number. 0-count kafka sources are not currently supported".
I just want to eliminate the use of a message bus/transport (Redis/Kafka/RabbitMQ) and have a direct binding of the source (kafka) and sink (hdfs) in the same YARN container.
Thanks
Satish Srinivasan
satsrinister#gmail.com
Thanks for the interest in Spring XD :).
For Spring XD 1.x, we suggest using composition instead of direct binding with the Kafka bus - or, in your case, the Kafka source. However, apart from that, in Spring XD 1.x it is not possible to create an entire stream without at least one hop over the bus (regardless of the type of bus or modules being used).
We are addressing direct binding (including support for entirely directly bound streams) as part of Spring Cloud Data Flow (http://cloud.spring.io/spring-cloud-dataflow/), which is the next evolution of Spring XD. We intend to support it as a specific configuration option rather than as a side effect of zero-count modules. From an end-user perspective, SCDF supports the same DSL as Spring XD (with minor variations) and has the same administration UI, and it definitely supports YARN, so it should be a fairly seamless transition. I would suggest starting to take a look at that. The upcoming 1.0.0.M2 release of Spring Cloud Data Flow will not support direct binding via the DSL yet, but the intent is to support it in the final release, which is currently planned for Q1 2016.
