Is there any way to use the key-class and value-class parameters with the GemFire sink in Spring XD?
According to the documentation I can only use keyExpression; there is nothing about its class type, and the same goes for key-class.
I have the following command for GemFire:
put --key-class=java.lang.String --value-class=Employee --key=('id':'998') --value=('id':186,'firstName':'James','lastName':'Goslinga') --region=replicated2
So I use the --key-class and --value-class parameters in GemFire, but I cannot use them from Spring XD since the GemFire sink only has a keyExpression parameter.
Any idea how to solve this?
As far as I know, the syntax above is not supported by native GemFire, so you can't do it out of the box with Spring XD. The syntax looks vaguely SQL-like. Are you using GemFire XD? Is this something you wrote yourself?
The gemfire sink uses spring-integration-gemfire, allowing you to declare the keyExpression using SpEL. The value, using the gemfire sink, is always the payload. The SI gemfire outbound adapter wraps Region.put(key, value). The GemFire API supports typing via generics, i.e. Region<K,V> but this is not enforced in this case. GemFire RegionFactory allows keyConstraint and valueConstraint attributes to constrain types but this is part of the Region configuration which is external to Spring XD. Furthermore, none of this addresses the data binding in your example, e.g.,
Person p = ('id': 186, 'firstName': 'James', 'lastName': 'Goslinga')
This capability would require a custom sink module. If your command can be executed as a shell script, you might be able to use a shell sink to invoke it.
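For reference, here is a minimal sketch of the keyConstraint/valueConstraint idea mentioned above, using the plain GemFire cache API. The Employee class and region name come from your example (its constructor is assumed here); everything else is illustrative, and this is Region configuration that lives outside Spring XD:

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.RegionShortcut;

public class TypedRegionSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Constrain the key and value types on the Region itself; this is part of the
        // Region configuration, not something the Spring XD gemfire sink can set.
        Region<String, Employee> region = cache
                .<String, Employee>createRegionFactory(RegionShortcut.REPLICATE)
                .setKeyConstraint(String.class)
                .setValueConstraint(Employee.class)
                .create("replicated2");

        // The SI gemfire outbound adapter ultimately performs the equivalent of:
        // (Employee constructor is assumed for illustration)
        region.put("998", new Employee(186, "James", "Goslinga"));
    }
}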
Thank you for your answer.
Basically, I can explain my problem this way:
If I run the following command in the GemFire console, I can create a new entry in the region that contains an object of the Employee class:
put --key-class=java.lang.String --value-class=Employee --key=('id':'998') --value=('id':186,'firstName':'James','lastName':'Goslinga') --region=replicated2
What I want to do is send data from Spring XD and end up with a new object of the Employee class in GemFire.
I create the following stream, which gets data from RabbitMQ and sends it to GemFire:
stream create --name reference-data-import --definition "rabbit --outputType=text/plain | gemfire-json-server --host=MC0WJ1BC --regionName=region10 --keyExpression=payload.getField('id')" --deploy
I can see that the data arrives as "com.gemstone.gemfire.pdx.internal.PdxInstanceImpl".
According to the Spring XD documentation I can use a parameter such as outputType=application/x-java-object;type=com.bar.Foo, but I never managed to get it to work, even though I deployed my class.
A simple working example would be great.
I have this entry in the application.properties of my Spring Boot app:
myapp.urls = url1,url2,url3
In a method in my component, I am creating an array like below:
String[] myArray = properties.getMyAppUrls().split(",");
I want this array-creation logic to execute only once. I know we can achieve this using @PostConstruct. Is there any other way to achieve this, for example during server startup?
I want this array built from the properties file during server startup so that I can use it in my component.
You can use Spring EL to do the job:
@Value("#{'${myapp.urls}'.split(',')}")
private List<String> myAppUrls;
I'd recommend moving this to a Configuration class; then you can autowire it wherever you need it.
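For example, a minimal sketch of the Configuration-class approach (the property name matches your application.properties; the class and bean names are just illustrative):

import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyAppUrlsConfig {

    // Splits myapp.urls=url1,url2,url3 once, when the context starts.
    @Bean
    public List<String> myAppUrls(@Value("#{'${myapp.urls}'.split(',')}") List<String> urls) {
        return urls;
    }
}

Any component can then simply autowire the List<String> myAppUrls bean.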
My task is to read events from multiple different topics (the class of all data in all topics is "Event"). This class contains a field "data" (a Map) which carries topic-specific data that can be deserialized to a specific class (e.g. to "DeviceCreateEvent"). I can create consumers for each topic with @KafkaListener on methods with the parameter type "Event", but in that case I first have to call event.getData() and deserialize it into the specific class, so I get code duplication in all consumer methods. Is there any way for the annotated consumer method to receive an object already deserialized to the specific class?
It's not clear what you are asking.
If you have a different #KafkaListener for each topic/event type, and use JSON, the framework will automatically tell the message converter the type the data should be converted to; see the documentation.
Although the Serializer and Deserializer API is quite simple and flexible from the low-level Kafka Consumer and Producer perspective, you might need more flexibility at the Spring Messaging level, when using either #KafkaListener or Spring Integration. To let you easily convert to and from org.springframework.messaging.Message, Spring for Apache Kafka provides a MessageConverter abstraction with the MessagingMessageConverter implementation and its JsonMessageConverter (and subclasses) customization. You can inject the MessageConverter into a KafkaTemplate instance directly and by using AbstractKafkaListenerContainerFactory bean definition for the #KafkaListener.containerFactory() property. The following example shows how to do so: ...
On the consumer side, you can configure a JsonMessageConverter; it can handle ConsumerRecord values of type byte[], Bytes and String so should be used in conjunction with a ByteArrayDeserializer, BytesDeserializer or StringDeserializer. (byte[] and Bytes are more efficient because they avoid an unnecessary byte[] to String conversion). You can also configure the specific subclass of JsonMessageConverter corresponding to the deserializer, if you so wish.
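As a rough sketch of that setup, assuming the values arrive as JSON strings (so a StringDeserializer on the consumer side). The event class names and topic names are taken from your description or are otherwise illustrative; they are not defined here:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;
import org.springframework.stereotype.Component;

@Configuration
class KafkaConsumerConfig {

    // Records arrive as JSON strings; the converter turns them into whatever type
    // each @KafkaListener method declares as its parameter.
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setMessageConverter(new StringJsonMessageConverter());
        return factory;
    }
}

@Component
class EventListeners {

    // One listener per topic; the parameter type tells the converter what to deserialize into.
    @KafkaListener(topics = "device-create-events")
    public void onDeviceCreate(DeviceCreateEvent event) {
        // handle the already-converted event
    }

    @KafkaListener(topics = "device-update-events")
    public void onDeviceUpdate(DeviceUpdateEvent event) {
        // handle the already-converted event
    }
}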
How do I get a java.sql.Connection instance from the Vert.x JDBC client's current connection, so that it can be used to retrieve the metadata of the tables/columns? I am using "io.vertx" %% "vertx-jdbc-client-scala" % "3.5.1".
Instances of io.vertx.ext.sql.SQLConnection provide the unwrap method to access the underlying java.sql.Connection.
When using the Scala wrapper, the `unwrap` method is currently not exposed (we are discussing how we can provide such methods in the future), so you have to get at the underlying object yourself.
Use asJava to get the underlying object and then invoke unwrap directly:
val con: io.vertx.scala.ext.sql.SQLConnection = ...
con.asJava.asInstanceOf[io.vertx.ext.sql.SQLConnection].unwrap(...)
I want to customize the grouping of tasks by topic name instead of by partition ID.
How do I refer to my custom partition grouper class in my Kafka Streams application?
Thanks
You can set a custom partition grouper class using the StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG option in your streams config.
However, as Matthias says, this is inadvisable unless you know what you're doing or want to learn :). Perhaps what you are trying to do can be accomplished some other way?
As a result of a bug in the Kafka Streams source code (version 2.1.0), you'll additionally need to add this configuration with a consumer prefix, as follows:
Properties props = new Properties();
props.put(StreamsConfig.consumerPrefix(StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG), CustomPartitionGrouper.class.getName());
props.put(StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG, CustomPartitionGrouper.class.getName());
The reason for adding the consumer config prefix is that the StickyTaskAssignor and the PartitionGrouper instances are being initialized in the consumer initialization flow. Without the prefix, the consumer will ignore the PARTITION_GROUPER_CLASS_CONFIG and will use the default which is the DefaultPartitionGrouper class.
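If you do go ahead, a rough sketch of a grouper that creates one task per topic (rather than one per partition) might look like the following. Treat it as illustrative only; the class name simply matches the config snippet above, and the PartitionGrouper interface shown is the one available in the 2.1.0 line:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.PartitionGrouper;
import org.apache.kafka.streams.processor.TaskId;

public class CustomPartitionGrouper implements PartitionGrouper {

    @Override
    public Map<TaskId, Set<TopicPartition>> partitionGroups(Map<Integer, Set<String>> topicGroups,
                                                            Cluster metadata) {
        Map<TaskId, Set<TopicPartition>> tasks = new HashMap<>();
        for (Map.Entry<Integer, Set<String>> group : topicGroups.entrySet()) {
            int topicGroupId = group.getKey();
            int taskIndex = 0;
            for (String topic : group.getValue()) {
                Integer partitionCount = metadata.partitionCountForTopic(topic);
                Set<TopicPartition> partitions = new HashSet<>();
                for (int p = 0; partitionCount != null && p < partitionCount; p++) {
                    partitions.add(new TopicPartition(topic, p));
                }
                // All partitions of this topic go into a single task.
                tasks.put(new TaskId(topicGroupId, taskIndex++), partitions);
            }
        }
        return tasks;
    }
}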
I have a rather large Spring application, and all I'm trying to share is a single Map (using a ConcurrentMap from java.util.concurrent as the implementation).
To do this, I created a bean in my appContext, and I tried to use the following tc-config line:
*/applicationContext.xml
Must I do something else to get this working? MyClass is a rather simple domain object that contains only primitives, two constructors, and accessors/mutators. I'm using Terracotta 3.0.0.
You need to create a tc-config.xml config file as described in http://www.terracotta.org/web/display/orgsite/Spring+Integration.