Spring XD - instanceof in SpEL expression in filter

I'm trying to use the instanceof operator in a SpEL expression in a filter for a stream. I'm trying the following:
stream create myStream --definition "tap:job:jobName > filter --expression='payload instanceof T(com.package.name.event.SomeEvent)' | log" --deploy
I am publishing my own event to the xd.job.aggregatedEvents channel. My intention is to log only my SomeEvent instances by filtering with the instanceof operator.
The problem is I am getting the following error:
org.springframework.expression.spel.SpelEvaluationException: EL1005E:(pos 0): Type cannot be found 'com.package.name.event.SomeEvent'
My question is: can anyone advise me on the proper syntax for instanceof in SpEL expressions? Or, if this syntax is correct, what might the problem be?

According to the stack trace (Type cannot be found) and the logic in StandardTypeLocator:
try {
    return ClassUtils.forName(nameToLookup, this.classLoader);
}
catch (ClassNotFoundException ey) {
    // try any registered prefixes before giving up
    ...
    throw new SpelEvaluationException(SpelMessage.TYPE_NOT_FOUND, typeName);
}
you simply end up with a ClassNotFoundException: the jar containing com.package.name.event.SomeEvent is outside the XD classpath.
You can compare it using literal:
--expression='payload.class.name == '''com.package.name.event.SomeEvent''''
Or just place your jar on the DIRT container classpath.
That said, it is generally a bad idea to use domain-specific types in messaging systems. Consider how to get by with standard supported types instead, and move the check condition to some value (of a standard type) in the message headers.
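A header-based check along those lines could look like this minimal plain-Java sketch. The header name "eventType" and the HeaderFilter class are hypothetical: the idea is that the publisher sets a header when emitting the event, so the filter never needs the payload's Java type on the container classpath.

```java
import java.util.Map;

public class HeaderFilter {
    // Hypothetical header name; the publisher would set this header when
    // sending the event, instead of relying on the payload class being
    // resolvable inside the container.
    static final String EVENT_TYPE_HEADER = "eventType";

    // Returns true when the message headers mark the payload as a SomeEvent.
    public static boolean accept(Map<String, Object> headers) {
        return "SomeEvent".equals(headers.get(EVENT_TYPE_HEADER));
    }

    public static void main(String[] args) {
        System.out.println(accept(Map.of(EVENT_TYPE_HEADER, "SomeEvent"))); // true
        System.out.println(accept(Map.of(EVENT_TYPE_HEADER, "OtherEvent"))); // false
    }
}
```

The corresponding filter expression would then compare a header value rather than a type, which works regardless of classpath layout.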

Related

KStream/KTable leftjoin after transform() leads to : StreamsException: A serializer is not compatible to the actual key or value type

I wrote the following DSL:
myStream
    .leftJoin(myKtable, new MyValueJoiner())
    .groupByKey(Grouped.with(Serdes.String(), MyObject.serde()))
    .reduce((v1, v2) -> v2, Materialized.as("MY_STORE"))
    .toStream()
This works correctly: the leftJoin() is fine, and the reduce() is materialized as a state store on which I can perform put() and delete().
However, if I write a MyTransformer class implementing the Transformer interface and do the following:
myOtherStream.transform(() -> new MyTransformer<>(), MY_STORE)
.leftJoin(myOtherKTable, new MyOtherValueJoiner<>());
Then I get the following exception:
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: org.apache.kafka.common.serialization.StringSerializer) is not compatible to the actual key or value type (key type: java.lang.String / value type: com.MyObject). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:94)
From the Javadoc, leftJoin seems to use the default serdes, and there seems to be no way to force the use of custom serdes, as is possible for other operators.
However, if I do something other than a leftJoin() after transform(), like a mapValues() or filter(), it works as expected. But as soon as I perform a leftJoin() I encounter the cast exception.
Can I use a leftJoin() after a transform()?
Why does the leftJoin() work in the first case, even though the doc says it uses the default serdes, whereas with the transformer it fails?
It turns out I had simply missed the leftJoin() overload in the Javadoc that takes a Joined argument to specify the serdes to use:
myOtherStream.transform(() -> new MyTransformer<>(), MY_STORE)
.leftJoin(myOtherKTable, new MyOtherValueJoiner<>(), Joined.with(JoinedKey.serde(), JoinedValue.serde(), JoinedValueOutput.serde()));

How can I manually evaluate the expression in a Spring #Value annotation?

My Spring Boot application has a bunch of @Value annotations. When the application is deployed to our Kubernetes cluster, it ends up using a properties file that is interpolated through a couple of different mechanisms. When it finally gets there, if developers make simple mistakes, the container can fail to start up, simply because they didn't set all the properties correctly. It's not easy to discover that this is what happened until well after the mistake is made.
Note that virtually all of these @Value annotations use the "${}" syntax, as opposed to "#{}". The primary concern is reading particular properties from a properties file, not Spring bean properties.
So, what I want to write is a little validation script (a small Java class), which does something like this:
Obtain the path to the generated properties file
Load that properties file into a Properties object
Scan the classpath for all classes (within a base package), and all fields in those classes, for @Value annotations
For each @Value annotation found, do some simple validation and evaluate the expression
If the validation or the evaluation fails, print an error message with all relevant details.
This script will run before the "kubectl rollout" happens. If we see these error messages before the rollout, we will save time diagnosing these problems.
I've been able to achieve everything so far except doing something with the loaded properties file and evaluating the expression. I know that Spring uses a bean postprocessor, but I don't know how I can manually call that.
Any idea how to fulfill that missing link?
Update:
I still don't have an answer to this.
I was thinking that perhaps the answer would be found in a BeanPostProcessor in the Spring codebase, so I cloned the spring-framework repo. I found a few candidates: AutowiredAnnotationBeanPostProcessor, BeanFactoryPostProcessor, CommonAnnotationBeanPostProcessor, and BeanPostProcessor, but I don't see anything in any of these that looks like evaluating the expression in the Value annotation. I would have tried setting a breakpoint in the value() method of the annotation, but of course you can't set a breakpoint in a method like that.
Update:
To be clear, this expression is not a "Spring EL" expression. Those reference bean properties (or loaded properties) and begin with "#{". I'm working with expressions that just reference properties, which begin with "${". I did try parsing the expression with Spring EL, but it just thinks there's nothing there.
I've managed to figure this out. The key is the "PropertyPlaceholderHelper.replacePlaceholders(String, Properties)" method. Using that, I developed something like this:
PropertyPlaceholderHelper propertyPlaceholderHelper =
        new PropertyPlaceholderHelper("${", "}", ":", true);
ClassPathScanningCandidateComponentProvider scanner =
        new ClassPathScanningCandidateComponentProvider(true);
boolean foundAtLeastOneUnfoundProperty = false;
for (BeanDefinition bd : scanner.findCandidateComponents(basePackage)) {
    String beanClassName = bd.getBeanClassName();
    Class<?> clazz = Class.forName(beanClassName);
    for (Field field : clazz.getDeclaredFields()) {
        Value valueAnnotation = field.getAnnotation(Value.class);
        if (valueAnnotation != null) {
            Matcher matcher = propertyRefPattern.matcher(valueAnnotation.value());
            if (matcher.matches()) {
                String resultingValue =
                        propertyPlaceholderHelper.replacePlaceholders(valueAnnotation.value(), properties);
                if (resultingValue.equals(valueAnnotation.value())) {
                    // The placeholder was not replaced, which means the property was not found.
                    System.out.println("ERROR: Expression \"" + valueAnnotation.value() +
                            "\" on field \"" + field.getName() + "\" in class \"" + beanClassName +
                            "\" references a property which is not defined.");
                    foundAtLeastOneUnfoundProperty = true;
                }
            }
        }
    }
}
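To illustrate what replacePlaceholders does conceptually, here is a minimal plain-Java sketch of ${name} / ${name:default} resolution with ":" as the default separator. This is not Spring's actual implementation (which also handles nested placeholders, among other things); it only shows the behavior the validation script relies on, namely that unresolvable placeholders are left as-is.

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderSketch {
    // Matches ${name} or ${name:default}.
    private static final Pattern PLACEHOLDER =
            Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?\\}");

    // Replaces each placeholder from the given properties, falling back to
    // the inline default if present. Unresolvable placeholders stay intact,
    // which is exactly what lets the script detect missing properties.
    public static String resolve(String expression, Properties properties) {
        Matcher m = PLACEHOLDER.matcher(expression);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = properties.getProperty(m.group(1), m.group(2));
            m.appendReplacement(sb,
                    Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("db.host", "localhost");
        System.out.println(resolve("jdbc://${db.host}:${db.port:5432}", props));
        // prints jdbc://localhost:5432
    }
}
```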

How to choose a field value from a specific stream in Storm

public void execute(Tuple input) {
    Object value = input.getValueByField(FIELD_NAME);
    ...
}
When calling getValueByField, how do I specify a particular stream name emitted by previous Bolt/Spout so that particular FIELD_NAME is coming from that stream?
I need to know this because I'm facing the following exception:
InvalidTopologyException(msg:Component: [bolt2-name] subscribes from non-existent stream: [default] of component [bolt1-name])
So, I want to specify a particular stream while calling getValueBy... methods.
I don't remember a way of doing it on a tuple, but you can get the information of who sent you the tuple:
String sourceComponent = tuple.getSourceComponent();
String streamId = tuple.getSourceStreamId();
Then you can use a classic switch/case in java to call a specific method that will know which fields are available.
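Such a switch/case dispatch could look like this plain-Java sketch; the stream names ("priceStream", "tradeStream") and handler methods are hypothetical placeholders for whatever your topology declares.

```java
public class StreamDispatcher {
    // Dispatches on the stream id obtained from tuple.getSourceStreamId();
    // each branch knows which fields that stream carries.
    public String dispatch(String sourceComponent, String streamId) {
        switch (streamId) {
            case "priceStream":
                return handlePrice(sourceComponent);
            case "tradeStream":
                return handleTrade(sourceComponent);
            default:
                return "unknown stream: " + streamId;
        }
    }

    private String handlePrice(String source) { return "price from " + source; }
    private String handleTrade(String source) { return "trade from " + source; }

    public static void main(String[] args) {
        StreamDispatcher d = new StreamDispatcher();
        System.out.println(d.dispatch("bolt1-name", "priceStream"));
        // prints price from bolt1-name
    }
}
```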
You can also iterate through the fields included in your tuple to check whether a field is available, though I find that approach dirty:
for (String field : tuple.getFields()) {
    // Check something on the field...
}
Just found out that the binding to a specific stream can be done while building the topology.
The spout can declare fields on a named stream (in its declareOutputFields method):
declarer.declareStream(streamName, new Fields(field1, field2));
...and emit values to that stream:
collector.emit(streamName, new Values(value1, value2...), msgID);
When a bolt is added to the topology, it can subscribe to a specific stream from a preceding spout or bolt like this:
topologyBuilder.setBolt(boltId, new BoltClass(), parallelismLevel)
    .localOrShuffleGrouping(spoutORBoltID, streamID);
The overloaded version of localOrShuffleGrouping provides an option to specify the streamID as the last argument.

Where does Grails' errors property come from?

Grails has a bug with regard to data binding: it throws a cast exception when dealing with bad numerical input. JIRA: http://jira.grails.org/browse/GRAILS-6766
To fix this I've written the following code to manually handle the numerical input on the POGO class Foo, located in src/groovy:
void setPrice(String priceStr) {
    this.priceString = priceStr
    // Remove $ and ,
    priceStr = priceStr.trim().replaceAll(java.util.regex.Matcher.quoteReplacement('$'), '').replaceAll(',', '')
    if (!priceStr.isDouble()) {
        errors.reject(
            'trade.price.invalidformat',
            [priceString] as Object[],
            'Price:[{0}] is an invalid price.')
        errors.rejectValue(
            'price',
            'trade.price.invalidformat')
    } else {
        this.price = priceStr.toDouble()
    }
}
The following throws a NullPointerException on the errors.reject() line:
foo.price = "asdf" // throws NullPointerException on errors.reject()
foo.validate()
However, I can say:
foo.validate()
foo.price = "asdf" // no Null exception
foo.hasErrors() // false
foo.validate()
foo.hasErrors() // true
Where does errors come from when validate() is called?
Is there a way to add the errors property without calling validate() first?
I can't tell you exactly why, but you need to call getErrors() explicitly instead of accessing it as the errors property. For some reason, Groovy isn't calling the getter. So change the reject lines in setPrice() to:
getErrors().reject(
    'trade.price.invalidformat',
    [priceString] as Object[],
    'Price:[{0}] is an invalid price.')
getErrors().rejectValue(
    'price',
    'trade.price.invalidformat')
That is the easiest way to make sure the Errors object exists in your method. You can check out the code that adds the validation related methods to your domain class.
The AST transformation handling @Validateable augments the class with, among other things:
a field named errors
public methods getErrors, setErrors, clearErrors and hasErrors
The getErrors method lazily sets the errors field if it hasn't yet been set. So it looks like accesses to errors within the same class are treated as field accesses rather than Java Bean property accesses, bypassing the lazy initialization.
So the fix appears to be to use getErrors() instead of just errors.
The errors property is added to your validateable classes (domain classes and classes annotated with @Validateable) dynamically.
Allowing the developer to set a String instead of a number doesn't seem like a good way to go. Also, your validation will work only for that particular class.
I think a better approach is to register a custom property editor for numbers. Here's an example with dates, which enables transforming a String (coming from the form) into a Date with a format like dd/MM/yyyy. The idea is the same, as you will enforce that your number is parseable (e.g. Integer.parseInt() will throw an exception).
In your domain class, use a numeric type instead of String, so developers will not be allowed to store non-numeric values in code.
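The normalize-then-parse step from the setter above can be sketched in plain Java; the PriceParser class and method name are hypothetical, and a real property editor would wrap this logic in the editor's setAsText method.

```java
public class PriceParser {
    // Strips currency formatting ("$" and ",") and parses the result,
    // throwing NumberFormatException for non-numeric input, which mirrors
    // the validation a custom property editor would enforce.
    public static double parsePrice(String priceStr) {
        String cleaned = priceStr.trim().replace("$", "").replace(",", "");
        return Double.parseDouble(cleaned);
    }

    public static void main(String[] args) {
        System.out.println(parsePrice("$1,234.50")); // prints 1234.5
    }
}
```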

Best practice for incorrect parameters on a remove method

So I have an abstract data type called RegionModel with a series of values (Region), each mapped to an index. It's possible to remove a number of regions by calling:
regionModel.removeRegions(index, numberOfRegionsToRemove);
My question is: what's the best way to handle a call when the index is valid (between 0, inclusive, and the number of regions in the model, exclusive) but numberOfRegionsToRemove is invalid (index + numberOfRegionsToRemove > the number of regions in the model)?
Is it best to throw an exception like IllegalArgumentException, or just to remove as many regions as I can (all the regions from index to the end of the model)?
Sub-question: if I throw an exception, what's the recommended way to unit test that the call threw the exception and left the model untouched? (I'm using Java and JUnit here, but I guess this isn't a Java-specific question.)
Typically, for structures like this, you have a remove method which takes an index and if that index is outside the bounds of the items in the structure, an exception is thrown.
That being said, you should be consistent with whatever the remove method that takes a single index does. If it simply ignores incorrect indexes, then also ignore ranges that exceed (or even start before) the indexes of the items in your structure.
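A bounds-checked removeRegions along those lines might look like this sketch; the RegionModel internals (a backing list of strings, an add method) are hypothetical stand-ins for the real abstract data type.

```java
import java.util.ArrayList;
import java.util.List;

public class RegionModel {
    private final List<String> regions = new ArrayList<>();

    public void add(String region) { regions.add(region); }

    public int length() { return regions.size(); }

    // Validates both arguments before touching the model, so a failed call
    // leaves the regions untouched, which is what the unit test checks.
    public void removeRegions(int index, int numberOfRegionsToRemove) {
        if (index < 0 || index >= regions.size()) {
            throw new IllegalArgumentException("index out of bounds: " + index);
        }
        if (numberOfRegionsToRemove < 0
                || index + numberOfRegionsToRemove > regions.size()) {
            throw new IllegalArgumentException(
                    "invalid count: " + numberOfRegionsToRemove);
        }
        regions.subList(index, index + numberOfRegionsToRemove).clear();
    }

    public static void main(String[] args) {
        RegionModel model = new RegionModel();
        model.add("r0");
        model.add("r1");
        model.add("r2");
        model.removeRegions(1, 2); // removes r1 and r2
        System.out.println(model.length()); // prints 1
    }
}
```

Validating before mutating is what makes the "model untouched on failure" guarantee from the sub-question easy to test.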
I agree with Mitchel and casperOne -- an Exception makes the most sense.
As far as unit testing is concerned, JUnit 4 allows you to test for exceptions directly:
http://www.ibm.com/developerworks/java/library/j-junit4.html
You would only need to pass parameters which are guaranteed to cause the exception, and add the correct annotation (@Test(expected=IllegalArgumentException.class)) to the JUnit test method.
Edit: As Tom Martin mentioned, JUnit 4 is a decent-sized step away from JUnit 3. It is, however, possible to also test exceptions using JUnit 3. It's just not as easy.
One of the ways I've tested exceptions is by using a try/catch block within the class itself, and embedding Assert statements within it.
Here's a simple example -- it's not complete (e.g. regionModel is assumed to be instantiated), but it should get the idea across:
public void testRemoveRegionsInvalidInputs() {
    int originalSize = regionModel.length();
    int index = 0;
    int numberOfRegionsToRemove = 1000; // > regionModel's current size
    try {
        regionModel.removeRegions(index, numberOfRegionsToRemove);
        // The exception should send us straight to the 'catch' block, so this
        // line only runs if the IllegalArgumentException wasn't thrown.
        Assert.fail("Exception not thrown!");
    }
    catch (IllegalArgumentException e) {
        Assert.assertTrue("Exception thrown, but regionModel was modified",
                regionModel.length() == originalSize);
    }
    catch (Exception e) {
        Assert.fail("Incorrect exception thrown");
    }
}
I would say that an exception such as IllegalArgumentException would be the best way to go here. If the calling code was not passing a workable value, you wouldn't necessarily want to trust that it really meant to remove what those arguments imply.
