Mixing XPath and simple expressions in Camel

I am just wondering if it is possible to mix XPath expression with a Camel simple expression.
The case:
In my configure() method of my route I have the code:
Namespaces ns = new Namespaces("sys1", "urn:com:example:namespace:system/1");
from("direct:from")
    .setExchangePattern(ExchangePattern.InOut)
    .to("{{request.endpoint}}")
    .choice()
        .when(header(STATUS_CODE).isEqualTo(200))
            .choice()
                .when(xpath("count(//sys1:UsageType[@code='0003'])>0").namespaces(ns))
                    .setHeader(STATUS_CODE, constant(404))
                    .setHeader(STATUS_MESSAGE, simple("Not found"))
                    .setBody(constant("Not found"))
                .endChoice()
                .otherwise()
                    .to("xslt:xslt/response.xsl?transformerFactoryClass=net.sf.saxon.TransformerFactoryImpl")
                .endChoice()
            .end()
    .endChoice()
    .end();
I want the value in the XPath expression to be configurable. Is it possible to have something like this in the XPath expression, using the same syntax as a simple expression?
.when(xpath("count(//sys1:UsageType[@code='${properties:filter.value}'])>0").namespaces(ns))
The only way I found is to inject (with Blueprint) the whole XPath (or just the value) into a variable.

From the Camel documentation:
// Following properties are given:
// foo=Camel
// bar=Kong
from("direct:in").choice()
// $type is a variable for the header with key type
// here we use the properties function to lookup foo from the properties files
// which at runtime will be evaluated to 'Camel'
.when().xpath("$type = function:properties('foo')")
.to("mock:camel")
// here we use the simple language to evaluate the expression
// which at runtime will be evaluated to 'Donkey Kong'
.when().xpath("//name = function:simple('Donkey ${properties:bar}')")
.to("mock:donkey")
.otherwise()
.to("mock:other")
.end();
Thus, something like this should work:
.when(xpath("count(//sys1:UsageType[@code=function:simple('${properties:filter.value}')])>0").namespaces(ns))

Related

Add function to GSON JsonObject

My target is to generate the JSON configuration of an apexchart ( https://apexcharts.com/javascript-chart-demos/bar-charts/custom-datalabels/ ) in Java with GSON.
The configuration contains a property "formatter" that has a JavaScript function as a value:
formatter: function (val, opt) {
return opt.w.globals.labels[opt.dataPointIndex] + ": " + val
},
When I add a property to a JsonObject like this jsonDataLabels.addProperty("formatter", "(val, opt) {...}"); then the property value in the output is (as expected) a String (with quotes) and apexchart doesn't interpret it.
How can I add an unquoted JavaScript function into a GSON JsonObject?
The short answer is you can't; Gson is a JSON library and what you are creating is not JSON but a JavaScript object.
However, you might be able to achieve this with Gson's JsonWriter.jsonValue method. That method is intended to write a raw JSON value, but you could misuse it to write your JavaScript function value:
JsonWriter jsonWriter = new JsonWriter(...);
jsonWriter.beginObject();
jsonWriter.name("formatter");
jsonWriter.jsonValue("function (val, opt) { ... }");
jsonWriter.endObject();
But be careful because jsonValue does not check the syntax of the provided value, so if you provide a malformed value, the resulting JSON (or in your case JavaScript object) might be malformed.
There may, however, be JavaScript code-writer libraries out there that are better suited to your use case.
A workaround (not my preferred approach) is to:
to write a unique placeholder into the JSON property,
export the JSON document into a String
replace the unique placeholder within the String
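A minimal sketch of that placeholder approach in plain Java (the JSON literal, class name, and placeholder string are made up for illustration; in practice the JSON string would come from gson.toJson()):

```java
public class FormatterSplice {

    // Replace the quoted placeholder (quotes included) with the raw JS function text.
    // The result is no longer valid JSON, only a JavaScript object literal.
    static String splice(String json, String placeholder, String jsFunction) {
        return json.replace("\"" + placeholder + "\"", jsFunction);
    }

    public static void main(String[] args) {
        String placeholder = "__FORMATTER_PLACEHOLDER__";
        // Pretend this came from gson.toJson(...) after calling
        // jsonDataLabels.addProperty("formatter", placeholder):
        String json = "{\"dataLabels\":{\"formatter\":\"" + placeholder + "\"}}";
        String js = "function (val, opt) { return opt.w.globals.labels[opt.dataPointIndex] + \": \" + val }";
        System.out.println(splice(json, placeholder, js));
    }
}
```

Just make sure the placeholder string is unique enough that it cannot occur anywhere else in the serialized document.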

Spring Cloud Function - how to pass multiple parameters to a function call in Kotlin

In the documentation for Spring Cloud Function, the examples for Kotlin consist of functions that take a single parameter, e.g.
@Bean
open fun lower(): (String) -> String = { it.lowercase() }
which is called via a URL that has the single parameter on the end as so:
http://localhost/lower/UpperCaseParam
How can more than one parameter be passed?
Is something like this supported?
@Bean
open fun upper(): (String, String) -> String = { x, y -> x + y }
or, if not multiple parameters, an object?
@Bean
open fun upper(): (Pair<String, String>) -> String = { it.first + it.second }
A Function by definition has only a single input and output. Even if we were to add support for BiFunction, that would only satisfy cases where you have exactly two inputs, etc.
The best way to achieve what you want is to use message headers, which you can pass as HTTP headers.
Then you can make your function signature accept Function<Message<YourPOJOType>, ...> and get the payload (your main argument, such as a request param) and the headers from the Message.
Alternatively, you can use a BiFunction, where the first argument is the payload and the second is a Map representing the message headers: BiFunction<YourPOJOType, Map, ...>. This way you can deal with your own types and keep your function completely free of anything Spring.
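As a plain-Java sketch of that BiFunction shape (the "suffix" header name and the String payload type are made-up examples; in Spring Cloud Function the lambda would be returned from a @Bean method):

```java
import java.util.Map;
import java.util.function.BiFunction;

public class UpperFunction {

    // First argument: the payload; second argument: a Map of message headers.
    // Nothing in this signature depends on Spring types.
    static final BiFunction<String, Map<String, Object>, String> upper =
            (payload, headers) -> payload.toUpperCase() + "-" + headers.get("suffix");

    public static void main(String[] args) {
        System.out.println(upper.apply("abc", Map.of("suffix", "x"))); // prints ABC-x
    }
}
```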

Spring Cloud Function - Separate routing-expression for different Consumer

I have a service which receives differently structured messages from different message queues. Using @StreamListener conditions, we can choose for every message type how that message should be handled. As an example:
We receive two different types of messages, which have different header fields and values e.g.
Incoming from "order" queue:
Order1: { Header: {catalog:groceries} }
Order2: { Header: {catalog:tools} }
Incoming from "shipment" queue:
Shipment1: { Header: {region:Europe} }
Shipment2: { Header: {region:America} }
There is a binding for each queue, and with the corresponding @StreamListener I can process the messages differently by catalog and region, e.g.
@StreamListener(target = OrderSink.ORDER_CHANNEL, condition = "headers['catalog'] == 'groceries'")
public void onGroceriesOrder(GroceryOrder order) {
    ...
}
So the question is, how to achieve this with the new Spring Cloud Function approach?
At the documentation https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.2.RELEASE/reference/html/spring-cloud-stream.html#_event_routing it is mentioned:
Also, for SpEL, the root object of the evaluation context is Message, so you can do evaluation on individual headers (or the message) as well: routing-expression=headers['type']
Is it possible to add the routing-expression to the binding like (in application.yml)
onGroceriesOrder-in-0:
  destination: order
  routing-expression: "headers['catalog']==groceries"
?
EDIT after first answer
If the above expression at this location is not possible, which is what the first answer implies, then my question is as follows:
As far as I understand, an expression like routing-expression: headers['catalog'] must be set globally, because the result maps to certain (consumer) functions.
How can I control that the two different messages on each queue will be forwarded to their own consumer function, e.g.
Order1 --> MyOrderService.onGroceriesOrder()
Order2 --> MyOrderService.onToolsOrder()
Shipment1 --> MyShipmentService.onEuropeShipment()
Shipment2 --> MyShipmentService.onAmericaShipment()
That was easy with @StreamListener, because each method gets its own @StreamListener annotation with different conditions. How can this be achieved with the new routing-expression setting?
Aside from the fact that the above is not a valid expression, I think you meant headers['catalog']=='groceries'. If so, what would you expect to happen from evaluating it, as the only two possible outcomes are true/false? Anyway, these questions are rhetorical, but they help to understand the problem and how to fix it.
The expression must result in the name of a function to route to. So:
routing-expression: headers['catalog'] - assumes that the actual value of the catalog header is the name of the function to invoke.
routing-expression: "headers['catalog']=='groceries' ? 'processGroceries' : 'processOther'" - maps the value 'groceries' to the 'processGroceries' function.
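As a sketch, the ternary form could then go into application.yml via the global routing-expression property (property name assumed here as spring.cloud.function.routing-expression; verify it against the docs for your Spring Cloud Function version):

```yaml
spring:
  cloud:
    function:
      routing-expression: "headers['catalog']=='groceries' ? 'processGroceries' : 'processOther'"
```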
For a specific routing, you can use MessageRoutingCallback strategy:
MessageRoutingCallback
The MessageRoutingCallback is a strategy to assist with determining
the name of the route-to function definition.
public interface MessageRoutingCallback {
FunctionRoutingResult routingResult(Message<?> message);
. . .
}
All you need to do is implement and register it as a bean to be picked
up by the RoutingFunction. For example:
@Bean
public MessageRoutingCallback customRouter() {
    return new MessageRoutingCallback() {
        @Override
        public FunctionRoutingResult routingResult(Message<?> message) {
            return new FunctionRoutingResult((String) message.getHeaders().get("func_name"));
        }
    };
}

How to manually test input validation with NestJS and class-validator

TL;DR: I was trying to test DTO validation in the controller spec instead of in e2e specs, which are precisely crafted for that. McDoniel's answer pointed me in the right direction.
I develop a NestJS entrypoint, looking like that:
@Post()
async doStuff(@Body() dto: MyDto): Promise<string> {
  // some code...
}
I use class-validator so that when my API receives a request, the payload is parsed and turned into a MyDto object, and the validations present as annotations in the MyDto class are performed. Note that MyDto has an array of nested objects of class MySubDto. With the @ValidateNested and @Type annotations, the nested objects are also validated correctly.
This works great.
Now I want to write tests for the performed validations. In my .spec file, I write:
import { validate } from 'class-validator';
// ...
it('should FAIL on invalid DTO', async () => {
  const dto = {
    //...
  };
  const errors = await validate( dto );
  expect(errors.length).not.toBe(0);
});
This fails because the validated dto object is not a MyDto. I can rewrite the test as such:
it('should FAIL on invalid DTO', async () => {
  const dto = new MyDto();
  dto.attribute1 = 1;
  dto.subDto = { 'name': 'Vincent' };
  const errors = await validate( dto );
  expect(errors.length).not.toBe(0);
});
Validations are now properly made on the MyDto object, but not on my nested subDto object, which means I would have to instantiate all the objects of my DTO with their respective classes, which would be quite inefficient. Also, instantiating classes means that TypeScript will raise errors if I voluntarily omit some required properties or assign incorrect values.
So the question is:
How can I use NestJs built-in request body parser in my tests, so that I can write any JSON I want for dto, parse it as a MyDto object and validate it with class-validator validate function?
Any alternate better-practice ways to tests validations are welcome too!
Although we should test how our validation DTOs work with the ValidationPipe, that's a form of integration or e2e testing. Unit tests are unit tests, right?! Every unit should be testable independently.
The DTOs in Nest.js are perfectly unit-testable. It becomes necessary to unit-test the DTOs when they contain complex regular expressions or sanitization logic.
Creating an object of the DTO for test
The request body parser in Nest.js that you are looking for is the class-transformer package. It has a function plainToInstance() to turn your literal or JSON object into an object of the specified type. In your example the specified type is the type of your DTO:
const myDtoObject = plainToInstance(MyDto, myBodyObject)
Here, myBodyObject is your plain object that you created for test, like:
const myBodyObject = { attribute1: 1, subDto: { name: 'Vincent' } }
The plainToInstance() function also applies all the transformations that you have in your DTO. If you just want to test the transformations, you can assert after this statement. You don't have to call the validate() function to test the transformations.
Validating the object of the DTO in test
To emulate the validation of Nest.js, simply pass myDtoObject to the validate() function of the class-validator package:
const errors = await validate(myDtoObject)
Also, if your DTO or SubDTO object is too big or too complex to create, you have the option to skip the remaining properties or subObjects like your subDto:
const errors = await validate(myDtoObject, { skipMissingProperties: true })
Now your test object could be without the subDto, like:
const myBodyObject = { attribute1: 1 }
Asserting the errors
Apart from asserting that the errors array is not empty, I also like to specify a custom error message for each validation in the DTO:
@IsPositive({ message: `Attribute1 must be a positive number.` })
readonly attribute1: number
One advantage of a custom error message is that we can write it in a user-friendly way instead of the generic messages created by the library. Another big advantage is that I can assert this error message in my tests. This way I can be sure that the errors array is not empty because it contains the error for this particular validation and not something else:
expect(stringified(errors)).toContain(`Attribute1 must be a positive number.`)
Here, stringified() is a simple utility function to convert the errors object to a JSON string, so we can search our error message in it:
export function stringified(errors: ValidationError[]): string {
return JSON.stringify(errors)
}
Your final test code
Instead of the controller.spec.ts file, create a new file specific to your DTO, like my-dto.spec.ts for unit tests of your DTO. A DTO can have plenty of unit tests and they should not be mixed with the controller's tests:
it('should fail on invalid DTO', async () => {
  const myBodyObject = { attribute1: -1, subDto: { name: 'Vincent' } }
  const myDtoObject = plainToInstance(MyDto, myBodyObject)
  const errors = await validate(myDtoObject)
  expect(errors.length).not.toBe(0)
  expect(stringified(errors)).toContain(`Attribute1 must be a positive number.`)
})
Notice how you don't have to assign the values to the properties one by one for creating the myDtoObject. In most cases, the properties of your DTOs should be marked readonly. So, you can't assign the values one by one. The plainToInstance() to the rescue!
That's it! You were almost there, unit testing your DTO. Good efforts! Hope that helps now.
To test input validation with the validation pipes, I think it is agreed that the best place to do this is in e2e tests rather than in unit tests, just make sure that you remember to register your pipes (if you normally use app.useGlobalPipes() instead of using dependency injection)

Jmeter - How to keep trying the same request until it succeeds

So right now I have my HTTP request under a While Controller, and I have a user-defined variable Failure set to true. I would like JMeter to keep trying this request until it succeeds (doesn't return 500).
My while loop condition is:
${__javaScript(${Failure})}
I also tried ${Failure} as while condition but getting the same result.
And I have a JSR223 Assertion after the results tree, as follows:
if (ResponseCode.equals("500") == true) {
vars.put("Failure", true)
}
else {
vars.put("Failure", false)
}
When I ran this, I got into an infinite loop even when my request succeeded. It seems the Failure value was never updated. Any suggestion on this would be appreciated.
This is because you're trying to add a Boolean object into a function which expects a String. In order to be able to store a Boolean value into a JMeter Variable you need to use vars.putObject() function instead like:
vars.putObject("Failure", true)
or surround true with quotation marks so it would look like a String to Groovy:
vars.put("Failure", "true");
Amend your JSR223 Assertion code to look like:
if (ResponseCode.equals("500")) {
vars.put("Failure", "true")
}
else {
vars.put("Failure", "false")
}
Amend your While Controller condition to be just ${Failure}. Using JavaScript is a form of performance anti-pattern; if you need to perform some scripting, go for Groovy. In your particular case you can just use the ${Failure} variable as the condition, given it can be either true or false.
I finally got it working. In my JSR223 Assertion, I updated it to:
if (prev.getResponseCode().equals("500")) {
vars.put("Failure", "true")
}
else {
vars.put("Failure", "false")
}
And it works now.
