I recently came across an issue that leaves me puzzled. I'd be grateful for any advice you can give, even if it's just about how to get more insight (e.g. logging).
I am using Spring Boot 2.0.0.M1 (as generated by start.spring.io) with the reactive (Netty-backed) org.springframework.web.reactive.function.client.WebClient.
When calling an old, non-reactive service that just returns a JSON object, the WebClient does not emit any event, even though the called service is fully responsive (see the log below).
2017-05-29 17:33:30,016 | reactor-http-nio-2 | INFO | | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber) | myLogClass:145 |
2017-05-29 17:33:30,016 | reactor-http-nio-2 | INFO | | request(unbounded) | myLogClass:145 |
2017-05-29 17:33:30,016 | reactor-http-nio-2 | DEBUG | | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber) | client:125 |
2017-05-29 17:33:30,016 | reactor-http-nio-2 | DEBUG | | request(unbounded) | client:125 |
While debugging this issue, I noticed something strange that leaves me thinking I'm simply using this implementation the wrong way: I saw a URL from another service (one that I called earlier) in Netty's pool/channel handling (I will add the class I found it in later).
Other observations:
when I leave the other WebClient call out of the picture, the problematic call works
the called services are behind a gateway, so they (might) have the same IP but different URIs
So my (admittedly unlikely) ideas so far:
Netty's channel/pool handling somehow mixes up my URLs
WebClient shouldn't be used with different URIs
something goes wrong in the URI/IP mapping on the way back from Netty to the subscribers
As I said in the beginning: I am thankful for any help, and of course I will add any information you need to reproduce this bugger.
Any pointer on how to write a test for this is welcome as well!
Starting with some information: this is how I use the WebClient, as a minimal sample (I know, I am not planning on using it in a blocking way...):
return WebClient.create(serviceUri)
        .put()
        .uri("/mypath/" + id)
        .body(fromObject(request))
        .accept(APPLICATION_JSON)
        .header(HttpHeaders.CONTENT_TYPE, APPLICATION_JSON_VALUE)
        .header(HttpHeaders.COOKIE, createCookie())
        .retrieve()
        .bodyToMono(Response.class)
        .log("com.mypackage.myLogClass", Level.ALL)
        .block();
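Regarding a test: one way to make the hang show up as a test failure (instead of blocking forever) is Reactor's StepVerifier from the reactor-test artifact. This is only a minimal sketch, assuming a hypothetical callService(id, request) wrapper that returns the Mono<Response> built above without the terminal .block():

import java.time.Duration;

import org.junit.Test;

import reactor.test.StepVerifier;

public class MyServiceClientTest {

    @Test
    public void emitsExactlyOneResponse() {
        // callService, id and request are placeholders for the snippet above
        StepVerifier.create(callService(id, request))
                .expectNextCount(1)             // exactly one Response should be emitted
                .expectComplete()
                .verify(Duration.ofSeconds(5)); // fail fast instead of hanging forever
    }
}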
Is it possible to store the configuration information for the states, actions, and transitions of the spring-state-machine in a database? The idea is to load that configuration data at startup and create the state machine from it. That way, we could modify the states, actions, and transitions at any time and reload the application to change the state machine graph.
Secondly, I am a bit confused about the persist functionality that the spring-state-machine offers. Is it for persisting history/activity-log information, i.e. which actions were executed by which user and resulted in which state transitions? Or is it some internal state of the state machine that helps reload it? If I wanted such an activity log in the database, does the spring-state-machine framework provide the capabilities to store that data?
The article on Medium configures the state machine like this:
@Override
public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
        throws Exception {
    transitions
        .withExternal()
            .source(States.ORDERED)
            .target(States.ASSEMBLED)
            .event(Events.assemble)
            .and()
        .withExternal()
            .source(States.ASSEMBLED)
            .target(States.DELIVERED)
            .event(Events.deliver)
            .and()
        .withExternal()
            .source(States.DELIVERED)
            .target(States.INVOICED)
            .event(Events.release_invoice);
}
So my thought is: if you have a table called tbl_transitions with the columns
id | from_state | to_state  | event
---------------------------------------------
1  | ORDERED    | ASSEMBLED | assemble
2  | ASSEMBLED  | DELIVERED | deliver
3  | DELIVERED  | INVOICED  | release_invoice
You could read the data from this table, loop over it, and build your transitions in a "non-fluent" way, as sketched below. I have not tried this myself, but it is a thought.
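A rough sketch of that loop (untested; TransitionRow and loadTransitionsFromDb() are hypothetical placeholders for however you read tbl_transitions, e.g. via JdbcTemplate):

@Override
public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
        throws Exception {
    // each row carries the from_state, to_state and event columns as strings
    for (TransitionRow row : loadTransitionsFromDb()) {
        transitions
            .withExternal()
                .source(States.valueOf(row.getFromState()))
                .target(States.valueOf(row.getToState()))
                .event(Events.valueOf(row.getEvent()))
                .and(); // and() returns the configurer, so the next iteration can chain onto it
    }
}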
I am having some difficulty deciding between two approaches to managing the rejection of messages in an MQ client. Admittedly, it's more an ideological argument than a technical one.
Consider this: a message (XML) on a queue is read by a client. The client checks the digital signature (and, by extension, whether the message adheres to a certain schema), before further processing. Let's say the verification of the digital signature fails though. I don't want the message to be further processed. It needs to go back to source and be sorted out 'by hand'.
As far as I can see, there are two approaches I could take:
Option 1
Client reads message
Client acknowledges receipt
Client discovers message is somehow invalid
Client writes invalid message onto 'reject' queue
CLIENT MQ CLIENT
READ +-------+ +----+
OUT Q | --- | --------> |PROCESS| -----> |NEXT|
| --- | |MESSAGE| |STEP|
+-----+ +-------+ +----+
|
|
REJECT Q | --- | <-------------+
| --- | FAILURE
+-----+
Option 2
Client reads message
Client discovers message is somehow invalid
Client does not acknowledge receipt of message
MRRTY = 0 (?) so QM writes message onto reject Q
CLIENT MQ CLIENT
READ +-------+ +----+
OUT Q | --- | --------> |PROCESS| -----> |NEXT|
| --- | <-------- |MESSAGE| |STEP|
+-----+ FAILURE +-------+ +----+
|
|
V
REJECT Q | --- |
| --- |
+-----+
I'm biased towards Option 2, where the QM is responsible for writing failed messages onto a reject queue, as it seems to me the neater solution. It would also mean that communication to the client flows in one direction only. I understand that CLIENT_ACKNOWLEDGE acknowledges receipt of all messages up to the point of acknowledgement: am I misguided in thinking that ACKing per message would be the mechanism that allows the QM to write failed messages onto the reject queue per the MRRTY parameter?
Any opinion / discussion re standard patterns / architecture much appreciated.
Thanks to both Morag and Attila for their help and input.
What it came down to was essentially this:
The application should handle the application errors, and a malformed message is an application error. The queue manager should only handle transport errors. (Attila)
and this...
There is no mechanism for having the queue manager route a failed message to a side queue. It is the responsibility of the application. (Morag)
So in the case of application errors the client itself will be expected to write failed / malformed messages back onto a separate queue out-of-band.
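For illustration, a minimal JMS-style sketch of that pattern (queue names and the validate()/process() helpers are made up; the transacted session keeps the read and the reject-write atomic):

import javax.jms.*;

// connectionFactory is assumed to be an already-configured MQ ConnectionFactory
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("OUT.Q"));
MessageProducer rejects = session.createProducer(session.createQueue("REJECT.Q"));
connection.start();

Message message = consumer.receive(5000);
if (message != null) {
    try {
        validate(message);     // hypothetical signature/schema check
        process(message);      // hypothetical happy-path processing
    } catch (InvalidMessageException e) {
        rejects.send(message); // the application routes the bad message out-of-band
    }
}
session.commit(); // the read and the (possible) reject-write succeed or fail together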
I'm using Spring XD 1.3.0.RELEASE to process a tuple message.
After using some taps to enrich the message, I've written my own aggregator to re-assemble the resulting messages.
Now I would like to use a PostgreSQL message store, to have persistence in case a node crashes.
So I roughly copy-pasted the XML configuration file of the original Spring XD aggregator.
Then I built and deployed the following stream:
stream create aggregate --definition "queue:scoring > scoring-aggregator --store=jdbc --username=${spring.datasource.username} --password=${spring.datasource.password} --driverClassName=${spring.datasource.driverClassName} --url=${spring.datasource.url} --initializeDatabase=false > queue:endAggr"
But when I send my usual tuple message to this stream (which was processed correctly with an in-memory store), I get:
xd_container_2 | Caused by: org.springframework.core.serializer.support.SerializationFailedException: Failed to serialize object using DefaultSerializer; nested exception is java.io.NotSerializableException: org.springframework.xd.tuple.DefaultTuple
xd_container_2 | at org.springframework.core.serializer.support.SerializingConverter.convert(SerializingConverter.java:68) ~[spring-core-4.2.2.RELEASE.jar:4.2.2.RELEASE]
xd_container_2 | at org.springframework.integration.jdbc.JdbcMessageStore.addMessage(JdbcMessageStore.java:345) ~[spring-integration-jdbc-4.2.4.RELEASE.jar:na]
xd_container_2 | at org.springframework.integration.jdbc.JdbcMessageStore.addMessageToGroup(JdbcMessageStore.java:386) ~[spring-integration-jdbc-4.2.4.RELEASE.jar:na]
xd_container_2 | at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.store(AbstractCorrelatingMessageHandler.java:604) ~[spring-integration-core-4.2.2.RELEASE.jar:na]
xd_container_2 | at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.handleMessageInternal(AbstractCorrelatingMessageHandler.java:400) ~[spring-integration-core-4.2.2.RELEASE.jar:na]
xd_container_2 | at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127) ~[spring-integration-core-4.2.2.RELEASE.jar:na]
And now... well, I'm stuck and don't have any idea how to proceed.
Any hint appreciated.
The Tuple is not Serializable (I am not sure why), but XD uses Kryo for serialization internally (by default); you could add Kryo transformers before/after the aggregator as a workaround.
EDIT
Another option is to convert the Tuple to JSON, using --inputType=application/json on the aggregator.
See type conversion.
The aggregator output would be a collection of JSON strings; getting them back to a Tuple would depend on what you are doing downstream of the aggregator.
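Applied to the stream definition above, that would look roughly like this (untested; everything except the added --inputType option is unchanged from your original definition):

stream create aggregate --definition "queue:scoring > scoring-aggregator --inputType=application/json --store=jdbc --username=${spring.datasource.username} --password=${spring.datasource.password} --driverClassName=${spring.datasource.driverClassName} --url=${spring.datasource.url} --initializeDatabase=false > queue:endAggr"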
I want to use Cucumber to test my application which takes snapshots of external websites and logs changes.
I have already tested my models separately using RSpec and now want to write integration tests with Cucumber.
For mocking the website requests I use VCR.
My tests usually follow a similar pattern:
1. Given I have a certain website content (I do this using VCR cassettes)
2. When I take a snapshot of the website
3. Then there should be 1 "new"-snapshot and 1 "new"-log message
Depending on whether the content of the website has changed, a "new" snapshot and a "new" log message should be created.
If the content stays the same, only an "old" log message should be created.
This means that the application's behaviour depends on the currently existing snapshots.
This is why I would like to run the different scenarios without resetting the DB after each row.
Scenario Outline: new, new, same, same, new
  Given website with state <state>
  When I take a snapshot
  Then there should be <snapshot_new> "new"-snapshots and <logmessages_old> "old"-log messages and <logmessages_new> "new"-log messages

  Examples:
    | state | snapshot_new | logmessages_old | logmessages_new |
    | VCR_1 | 1            | 0               | 1               |
    | VCR_2 | 2            | 0               | 2               |
    | VCR_3 | 2            | 1               | 2               |
    | VCR_4 | 2            | 2               | 2               |
    | VCR_5 | 3            | 2               | 3               |
However, the DB is reset after each scenario is run.
And I think that Scenario Outline was never intended to be used like this; scenarios should be independent of each other, right?
Am I doing something wrong in trying to solve my problem this way?
Can/should Scenario Outline be used for this, or is there another, more elegant way to do it?
J.
Each line in the Scenario Outline's Examples table should be considered an individual scenario, and scenarios should be independent of each other.
If you need a scenario to depend on the system being in a certain state, you'll need to set up that state in the Given.
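For example, the pre-existing snapshots could be declared per row instead of accumulating across scenarios (a sketch only; the existing column and its Given step are hypothetical and would need their own step definition):

Scenario Outline: snapshot creation depends on existing snapshots
  Given <existing> snapshots of the website already exist
  And website with state <state>
  When I take a snapshot
  Then there should be <snapshot_new> "new"-snapshots

  Examples:
    | existing | state | snapshot_new |
    | 0        | VCR_1 | 1            |
    | 1        | VCR_2 | 2            |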
I've recently read the article CQRS à la Greg Young and am still trying to get my head around CQRS.
I'm not sure where input validation should happen, and whether it might have to happen in two separate places (thereby violating the Don't Repeat Yourself rule and possibly also Separation of Concerns).
Given the following application architecture:
# +--------------------+ ||
# | event store | ||
# +--------------------+ ||
# ^ | ||
# | events | ||
# | v
# +--------------------+ events +--------------------+
# | domain/ | ---------------------> | (denormalized) |
# | business objects | | query repository |
# +--------------------+ || +--------------------+
# ^ ^ ^ ^ ^ || |
# | | | | | || |
# +--------------------+ || |
# | command bus | || |
# +--------------------+ || |
# ^ |
# | +------------------+ |
# +------------ | user interface | <-----------+
# commands +------------------+ UI form data
The domain is hidden from the UI behind a command bus. That is, the UI can only send commands to the domain, but never gets to the domain objects directly.
Validation must not happen when an aggregate root is reacting to an event, but earlier.
Commands are turned into events in the domain (by the aggregate roots). This is one place where validation could happen: If a command cannot be executed, it isn't turned into a corresponding event; instead, (for example) an exception is thrown that bubbles up through the command bus, back to the UI, where it gets caught.
Problem:
If a command won't be able to execute, I would like to disable the corresponding button or menu item in the UI. But how do I know whether a command can execute before sending it on its way? The query side won't help me here, since it doesn't contain any business logic whatsoever; and all I can do on the command side is send commands.
Possible solutions:
For any command DoX, introduce a corresponding dummy command CanDoX that won't actually do anything, but lets the domain give feedback whether command X could execute without error.
Duplicate some validation logic (that really belongs in the domain) in the UI.
Obviously the second solution isn't favorable (due to lacking separation of concerns). But is the first one really better?
I think my question has just been answered by another article, Clarified CQRS by Udi Dahan. The section "Commands and Validation" starts as follows:
Commands and Validation
In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn't. Business rules on the other hand are context dependent.
[…] Even though a command may be valid, there still may be reasons to reject it.
As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.
I take this to mean that, given a task-based UI, as is often suggested for CQRS to work well (commands as domain verbs), I would only ever gray out (disable) buttons or menu items if a command cannot yet be sent off because some data required by the command is still missing or invalid; i.e. the UI reacts to the validity of the command itself, not to the command's future effect on the domain objects.
Therefore, no CanDoX commands are required, and no domain validation logic needs to leak into the UI. What the UI will have, however, is some logic for command validation.
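A minimal sketch of what such client-side command validation might look like (all names hypothetical; note that it only checks context-independent facts about the command, never business rules):

// The UI enables the submit button only while isValid() returns true;
// the server still re-validates every command it receives.
public final class RegisterCustomerCommand {

    private final String name;
    private final String email;

    public RegisterCustomerCommand(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Format-level validation only: required fields are present and the
    // email looks like an email. Business rules stay in the domain.
    public boolean isValid() {
        return name != null && !name.trim().isEmpty()
                && email != null && email.matches("(.+)@(.+)\\.(.+)");
    }
}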
Client-side validation is basically limited to format validation, because the client cannot know the state of the data model on the server. What is valid now may be invalid half a second from now.
So, the client side should check only whether all required fields are filled in and whether they are of the correct form (an email-address field must contain a valid email address, for instance matching (.+)@(.+)\.(.+) or the like).
All those validations, along with business-rule validations, are then performed on the domain model in the command service. Therefore, data that was validated on the client may still result in commands being rejected on the server. In that case, some feedback should make it back to the client application... but that's another story.