deriving spring state machine from database - spring-boot

Is it possible to store the configuration information for the states, actions, and transitions of the spring-state-machine in a database? The idea is to load that configuration data at startup and create the state machine using that data. This way, we can modify the states, actions, and transitions at any time and reload the application to modify the state machine graph.
Secondly, I am a bit confused about the persist functionality that the spring-state-machine offers. Is it to persist the history/activity log information, in terms of which actions were executed by which user, resulting in state transitions? Or is it some internal state of the state machine that helps reload it? If I wanted such an activity log available in the database, does the spring-state-machine framework provide the capability to store that data?

The article on Medium says to configure the state machine like this:
@Override
public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
        throws Exception {
    transitions
        .withExternal()
            .source(States.ORDERED)
            .target(States.ASSEMBLED)
            .event(Events.assemble)
            .and()
        .withExternal()
            .source(States.ASSEMBLED)
            .target(States.DELIVERED)
            .event(Events.deliver)
            .and()
        .withExternal()
            .source(States.DELIVERED)
            .target(States.INVOICED)
            .event(Events.release_invoice);
}
So my thought is: if you have a table called tbl_transitions with the columns
id | from_state | to_state  | event
---+------------+-----------+----------------
 1 | ORDERED    | ASSEMBLED | assemble
 2 | ASSEMBLED  | DELIVERED | deliver
 3 | DELIVERED  | INVOICED  | release_invoice
you could read the data from this table, loop over it, and build your transitions in a "non-fluent" way. I have not tried this myself, but it is a thought.
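To illustrate, here is a minimal, untested sketch of that loop, assuming Spring Statemachine's EnumStateMachineConfigurerAdapter plus a JdbcTemplate and reusing the hypothetical tbl_transitions table above; the TransitionRow record and the omitted @Configuration/@EnableStateMachine plumbing are assumptions, not part of the original answer.

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

public class DbDrivenStateMachineConfig
        extends EnumStateMachineConfigurerAdapter<States, Events> {

    private final JdbcTemplate jdbcTemplate;

    public DbDrivenStateMachineConfig(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
            throws Exception {
        // Read the transition graph from the (hypothetical) tbl_transitions table.
        List<TransitionRow> rows = jdbcTemplate.query(
                "SELECT from_state, to_state, event FROM tbl_transitions",
                (rs, i) -> new TransitionRow(
                        rs.getString("from_state"),
                        rs.getString("to_state"),
                        rs.getString("event")));

        // Build the transitions "non-fluently" by looping over the rows.
        for (TransitionRow row : rows) {
            transitions
                .withExternal()
                    .source(States.valueOf(row.fromState()))
                    .target(States.valueOf(row.toState()))
                    .event(Events.valueOf(row.event()))
                    .and();
        }
    }

    // Simple carrier for one row of tbl_transitions (illustrative only).
    private record TransitionRow(String fromState, String toState, String event) {}
}

Spring Statemachine also has a StateMachineModelFactory abstraction for building a machine from an externally stored model, which may be worth a look before rolling your own table format.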

Related

Invoke a command on a node in Karaf Cellar based on NodeID

At the moment, I have a cellar setup with just two nodes (meant for testing); as seen in the dump below:
  | Id                | Alias          | Host Name    | Port
--+-------------------+----------------+--------------+-----
x | 192.168.99.1:5702 | localhost:8182 | 192.168.99.1 | 5702
  | 192.168.99.1:5701 | localhost:8181 | 192.168.99.1 | 5701
Edit 1 -- Additional Information about the setup (begin):
I have multiple Cellar nodes. I am trying to make one node the master, which is supposed to expose a management web panel via which I would like to fetch stats from all the other nodes. For this purpose, I have exposed my custom MBean implementations containing my business logic. I understand that these MBeans can be invoked using Jolokia, and I am already doing that. So all these different nodes will have Jolokia installed, while the master node will have Hawtio installed (so that I can connect to the slave nodes via the Jolokia API through the Hawtio panel).
Right now, I am manually assigning the alias of every node (it refers to the web endpoint that the node exposes via its pax.web configuration). This is just a workaround to simplify my testing procedures.
Desired Process:
I have access to the ClusterManager service via the service registry. Thus, I am able to invoke clusterManager.listNodes() and loop through the result in my MBean. While looping through this, all I get is the basic node info. But, if possible, I would like to parse the etc/org.ops4j.pax.web.cfg file of every node and get the port number (or the value of the property org.osgi.service.http.port).
While retrieving the list of nodes, I would like to get a response as:
{
  "Node 1": {
    "hostname": "192.168.0.100",
    "port": 5701,
    "webPort": "8181",
    "alias": "Data-Node-A",
    "id": "192.168.0.100:5701"
  },
  "Node 2": {
    "hostname": "192.168.0.100",
    "port": 5702,
    "webPort": "8182",
    "alias": "Data-Node-B",
    "id": "192.168.0.100:5702"
  }
}
Edit 1 (end):
I am trying to find a way to execute specific commands on a particular node. For example, I want to execute a command on Node *:5702 from *:5701 such that *:5702 returns the properties and values of a local configuration file.
My current method is not optimal, as I am setting the alias (the web endpoint for Jolokia) of each node manually, and based on that I am retrieving my desired info via my custom MBean. I guess this is not the best practice.
So far, I have:
Set<Node> nodes = clusterManager.listNodes();
Thus, if I loop through this set of nodes, I would like to retrieve the config settings from the local configuration file of every node, based on the node ID.
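For what it's worth, here is a hedged sketch of that loop, assuming the Cellar ClusterManager/Node API (getId/getHost/getPort) and the OSGi ConfigurationAdmin service; note that ConfigurationAdmin can only resolve the pax-web port of the local node, which is exactly the remote-node gap this question is about.

import java.util.Dictionary;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.karaf.cellar.core.ClusterManager;
import org.apache.karaf.cellar.core.Node;
import org.osgi.service.cm.ConfigurationAdmin;

public class NodeInfoCollector {

    public Map<String, Object> collect(ClusterManager clusterManager,
                                       ConfigurationAdmin configAdmin) throws Exception {
        Map<String, Object> result = new HashMap<>();

        // Basic info is available for every node in the cluster...
        Set<Node> nodes = clusterManager.listNodes();
        for (Node node : nodes) {
            Map<String, Object> info = new HashMap<>();
            info.put("id", node.getId());
            info.put("hostname", node.getHost());
            info.put("port", node.getPort());
            result.put(node.getId(), info);
        }

        // ...but the pax-web port can only be read this way for the *local* node.
        // Fetching it from remote nodes would need a Cellar event (ping-pong style)
        // or a per-node Jolokia call.
        Dictionary<String, Object> paxWeb =
                configAdmin.getConfiguration("org.ops4j.pax.web").getProperties();
        if (paxWeb != null) {
            result.put("localWebPort", paxWeb.get("org.osgi.service.http.port"));
        }
        return result;
    }
}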
Do I need to implement something specific to DOSGi here?
Or would it be something similar to the ping-pong sample code (https://github.com/apache/karaf-cellar/tree/master/utils/src/main/java/org/apache/karaf/cellar/utils/ping) from the Apache Karaf Cellar project?
Any input on this would be very helpful.
P.S. I tried posting this in Karaf mailing list, but my posts are getting bounced.
Regards,
Cooshal.

Want to execute Background only once in Cucumber feature files for multiple scenarios

I want to execute the Background only once in each Cucumber feature file for multiple scenarios. How can I do that in the step files?
Feature: User can verify...........

  Background:
    Given Enter test data for a specific logic

  Scenario: Verify ......... 1
    When A1
    And B1
    Then C1

  Scenario: Verify ......... 2
    When A2
    And B2
    Then C2

  Scenario: Verify ......... 3
    When A3
    And B3
    Then C3
Tests should be isolated. That is the way Cucumber is designed, and there's a very good reason for it. I would strongly urge against it unless you absolutely have to.
If you have something that needs to be executed before your entire test suite, consider a @BeforeAll hook.
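For instance, in Cucumber-JVM 7+ a suite-level hook looks roughly like this (the hook body is just an illustrative placeholder):

import io.cucumber.java.BeforeAll;

public class SuiteHooks {

    // Runs once before the whole test suite; @BeforeAll hooks must be static.
    @BeforeAll
    public static void provisionSharedFixtures() {
        // e.g. start a test container, seed reference data, obtain a shared API session
    }
}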
That said, I've come across this before. We had tests which would start a process that took a long time (such as provisioning a VM, which could take 10 minutes at a time), but which could be skipped in other tests if it had already been done.
So you may want to design steps along the lines of..
"Given that .. X has been done"
and in the step detect whether X is done or not, and if not, then do X. If it has, then skip it. For example, say creating users is a process that takes absolutely ages. Then we could do..
Given that user "Joe Bloggs" has been created
The step definition would first try to determine if Joe existed, and then if they didn't, create the user. You have an initial start up problem, but then other tests will be able to safely assume Joe exists.
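A minimal Cucumber-JVM sketch of that pattern; the user lookup/creation here is a placeholder stand-in for whatever slow provisioning API you actually have:

import java.util.HashSet;
import java.util.Set;
import io.cucumber.java.en.Given;

public class UserSteps {

    // Stand-in for the real (slow) user-provisioning API; static so the
    // knowledge of created users survives across scenarios in one test run.
    private static final Set<String> knownUsers = new HashSet<>();

    @Given("that user {string} has been created")
    public void thatUserHasBeenCreated(String fullName) {
        // Only pay the expensive creation cost if the user does not exist yet.
        if (!knownUsers.contains(fullName)) {
            // ... call the real user-creation API here ...
            knownUsers.add(fullName);
        }
    }
}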
WHY YOU SHOULD NOT DO THIS
If you do this, there's a high probability your tests will conflict with one another.
Say you have lots of tests that use the Joe Bloggs user. Maybe some will delete him from the system. Some might temporarily deactivate the user, add roles, change their name... all sorts of things. All tests assume certain things about the system they're testing, and you are intentionally harming your tests' assumptions about the environment.
It's particularly bad if you are running parallel tests. Maybe your system has a restriction that only one person can log in at a time as Joe. Or every test is changing loads of things about Joe, and no test can assume anything about the state of the Joe user. Then you'll be in a huge mess.
The best solution is often to create brand new data for each test you run. Open up those APIs and create disposable data for each test run. Here's a good blog post about it: https://opencredo.com/test-automation-concepts-data-aliases/
The Background is designed to run before each scenario. It is neither good nor standard practice to hack it.
If you want your Background to run only once, you can add a condition on a flag variable: e.g. if i == 0, execute the logic and increment i at the end of the method.
For the next scenario, i is 1, which is not equal to 0, so the logic won't execute.
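A rough sketch of that guard in Cucumber-JVM, reusing the Background step from the question; note the flag needs to be static (or otherwise shared), because Cucumber creates a fresh instance of the glue class for every scenario:

import io.cucumber.java.en.Given;

public class BackgroundSteps {

    // Static guard so it survives across scenarios; a plain instance field
    // would be reset for every scenario.
    private static boolean testDataLoaded = false;

    @Given("Enter test data for a specific logic")
    public void enterTestDataForASpecificLogic() {
        if (!testDataLoaded) {
            // ... load the expensive test data once ...
            testDataLoaded = true;
        }
    }
}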
You can have both a Scenario and a Scenario Outline in a single feature file. The Scenario will run only once, while the Scenario Outline runs once per row of its Examples table.
We had a similar problem and couldn't find any solution for running the Background once for multiple scenarios. The Background is designed to run for all scenarios: it runs again before every scenario, and if a scenario has Examples it runs before each example row.
We had two options to fix this problem:
1) Use JUnit's @BeforeClass annotation.
2) Create a setup scenario that always executes first.
For example, in API testing you log in once and reuse that session every time to access the other APIs:
Feature: Setup Data

  Scenario: Setup
    Given Customer logs in as System Admin

  Scenario: Verify ......... 1
    When A1
    And B1
    Then C1

  Scenario: Verify ......... 2
    When A2
    And B2
    Then C2

  Scenario: Verify ......... 3
    When A3
    And B3
    Then C3
After the first (setup) scenario runs, the remaining scenarios execute with the session already in place, and you don't need a Background at all.
I would say a Background that runs before every scenario should only be used when the business requirement really demands it; otherwise it creates unwanted test data, puts load on the test environment, and can slow down test execution.
Please let me know if you find another solution.
Running some steps (a Background) before each group of scenarios or each Scenario Outline can also be achieved by creating a tagged @Before hook that takes a Scenario object as a parameter. Within the hook, execute your logic only if the scenario name differs from the previous scenario's name.
Below is how you can do it:
Feature: Setup Data
  Given Customer logs in as System Admin

  @BeforeMethodName
  Scenario Outline: Verify ......... 1
    When <Variable1>
    And <Variable2>
    Then <Variable3>

    Examples:
      | Variable1 | Variable2 | Variable3 |
      | A1        | B1        | C1        |
      | A2        | B2        | C2        |
      | A3        | B3        | C3        |
      | A4        | B4        | C4        |

  @BeforeMethodName
  Scenario Outline: Verify ......... 2
    When <Variable1>
    And <Variable2>
    Then <Variable3>

    Examples:
      | Variable1 | Variable2 | Variable3 |
      | X1        | Y1        | Z1        |
      | X2        | Y2        | Z2        |
      | X3        | Y3        | Z3        |
      | X4        | Y4        | Z4        |
And define @BeforeMethodName as a tagged hook (io.cucumber.java.Before and io.cucumber.java.Scenario in current Cucumber-JVM):

private static String scenarioName = null;

@Before("@BeforeMethodName")
public void beforeMethodName(Scenario scene) {
    if (!scene.getName().equals(scenarioName)) {
        // Implement your logic here
        scenarioName = scene.getName();
    }
}
This way BeforeMethodName will be called before each scenario but will execute the logic only once per Scenario Outline.
Old issue, but adding this in case others find it.
As has been mentioned, Cucumber should only be used to structure your code.
You can use tagged hooks to create items that are used by a subset of tests. Furthermore, you could isolate the code into helpers and then conditionally call these helpers inside your Ruby steps.
We'd probably need a bit more clarity to make a judgement.

How to run dependent cucumber scenarios with scenario outline

I want to use Cucumber to test my application which takes snapshots of external websites and logs changes.
I already tested my models separately using RSpec and now want to write integration tests with Cucumber.
For mocking the website requests I use VCR.
My tests usually follow a similar pattern:
1. Given I have a certain website content (I do this using VCR cassettes)
2. When I take a snapshot of the website
3. Then there should be 1 "new"-snapshot and 1 "new"-log messages
Depending on whether the content of the website has changed, a "new"-snapshot and a "new"-log message should be created.
If the content stays the same, only an "old"-log message should be created.
This means that the application's behaviour depends on the currently existing snapshots.
This is why I would like to run the different scenarios without resetting the DB after each row.
Scenario Outline: new, new, same, same, new
  Given website with state <state>
  When I take a snapshot
  Then there should be <snapshot_new> "new"-snapshots and <logmessages_old> "old"-log messages and <logmessages_new> "new"-log messages

  Examples:
    | state | snapshot_new | logmessages_old | logmessages_new |
    | VCR_1 | 1            | 0               | 1               |
    | VCR_2 | 2            | 0               | 2               |
    | VCR_3 | 2            | 1               | 2               |
    | VCR_4 | 2            | 2               | 2               |
    | VCR_5 | 3            | 2               | 3               |
However, the DB is reset after each scenario is run.
And I think that Scenario Outline was never intended to be used like this. Scenarios should be independent of each other, right?
Am I doing something wrong trying to solve my problem in this way?
Can/should scenario outline be used for that or is there another elegant way to do this?
J.
Each line in the Scenario Outline Examples table should be considered one individual Scenario. Scenarios should be independent from each other.
If you need a scenario to depend on the system being in a certain state, you'll need to set that state in the Given.

Does validation in CQRS have to occur separately once in the UI, and once in the business domain?

I've recently read the article CQRS à la Greg Young and am still trying to get my head around CQRS.
I'm not sure about where input validation should happen, and if it possibly has to happen in two separate locations (thereby violating the Don't Repeat Yourself rule and possibly also Separation of Concerns).
Given the following application architecture:
#         +--------------------+        ||
#         |    event store     |        ||
#         +--------------------+        ||
#               ^    |                  ||
#               |    | events           ||
#               |    v                  ||
#         +--------------------+     events     +--------------------+
#         |      domain/       | -------------> |   (denormalized)   |
#         |  business objects  |                |  query repository  |
#         +--------------------+        ||      +--------------------+
#             ^ ^ ^ ^ ^                 ||                 |
#             | | | | |                 ||                 |
#         +--------------------+        ||                 |
#         |    command bus     |        ||                 |
#         +--------------------+        ||                 |
#                   ^                                      |
#                   |          +------------------+        |
#                   +--------- |  user interface  | <------+
#                    commands  +------------------+    UI form data
The domain is hidden from the UI behind a command bus. That is, the UI can only send commands to the domain, but never gets to the domain objects directly.
Validation must not happen when an aggregate root is reacting to an event, but earlier.
Commands are turned into events in the domain (by the aggregate roots). This is one place where validation could happen: If a command cannot be executed, it isn't turned into a corresponding event; instead, (for example) an exception is thrown that bubbles up through the command bus, back to the UI, where it gets caught.
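To make that flow concrete, here is a rough, framework-agnostic sketch of the idea; all names (Order, ShipOrderCommand, OrderShippedEvent) are illustrative, not from any particular CQRS library:

public class Order {

    private boolean paid;
    private boolean shipped;

    public void handle(ShipOrderCommand command) {
        // Business-rule check happens here, before any event is produced.
        if (!paid) {
            throw new IllegalStateException(
                    "Order " + command.orderId() + " cannot ship before it is paid");
        }
        // Only a command that can be executed results in an event being applied/stored.
        apply(new OrderShippedEvent(command.orderId()));
    }

    private void apply(OrderShippedEvent event) {
        this.shipped = true;
        // ...append the event to the event store / uncommitted-events list...
    }

    public record ShipOrderCommand(String orderId) {}
    public record OrderShippedEvent(String orderId) {}
}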
Problem:
If a command won't be able to execute, I would like to disable the corresponding button or menu item in the UI. But how do I know whether a command can execute before sending it off on its way? The query side won't help me here, as it doesn't contain any business logic whatsoever; and all I can do on the command side is send commands.
Possible solutions:
For any command DoX, introduce a corresponding dummy command CanDoX that won't actually do anything, but lets the domain give feedback whether command X could execute without error.
Duplicate some validation logic (that really belongs in the domain) in the UI.
Obviously the second solution isn't favorable (due to lacking separation of concerns). But is the first one really better?
I think my question has just been solved by another article, Clarified CQRS by Udi Dahan. The section "Commands and Validation" starts as follows:
Commands and Validation
In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn't. Business rules, on the other hand, are context dependent.
[…] Even though a command may be valid, there still may be reasons to reject it.
As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.
I take this to mean that — given that I have a task-based UI, as is often suggested for CQRS to work well (commands as domain verbs) — I would only ever gray out (disable) buttons or menu items if a command cannot yet be sent off because some data required by the command is still missing, or invalid; ie. the UI reacts to the command's validness itself, and not to the command's future effect on the domain objects.
Therefore, no CanDoX commands are required, and no domain validation logic needs to be leaked into the UI. What the UI will have, however, is some logic for command validation.
Client-side validation is basically limited to format validation, because the client side cannot know the state of the data model on the server. What is valid now, may be invalid 1/2 second from now.
So, the client side should check only whether all required fields are filled in and whether they are of the correct form (an email address field must contain a valid email address, for instance of the form (.+)@(.+)\.(.+) or the like).
All those validations, along with business rule validations, are then performed on the domain model in the Command service. Therefore, data that was validated on the client may still result in invalidated Commands on the server. In that case, some feedback should be able to make it back to the client application... but that's another story.
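As a rough illustration of that split (the command and field names below are made up, not from any framework): the UI enables the submit button only when the command is well-formed, while the server-side command handler re-validates and additionally applies the context-dependent business rules, which may still reject a well-formed command.

import java.util.regex.Pattern;

public final class RegisterCustomerCommand {

    private static final Pattern EMAIL = Pattern.compile("(.+)@(.+)\\.(.+)");

    private final String name;
    private final String email;

    public RegisterCustomerCommand(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Client-side, context-independent check: are all required fields present
    // and well-formed? No domain state is consulted here.
    public boolean isWellFormed() {
        return name != null && !name.isBlank()
                && email != null && EMAIL.matcher(email).matches();
    }
}

On the server, the handler would run the same well-formedness check again and then enforce business rules (for example, "this email is not registered yet"), so feedback about rejection still has to travel back to the client.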

Cucumber & test data management for non-Ruby apps

I'm testing an app that's basically a message-handling application - a message comes in, its content is analysed, then it's sent somewhere else for processing. The app isn't built in Ruby.
As you might imagine, the main testing approach consists of generating a whole bunch of different types of (quite complex) messages, loading them into the app, waiting a few seconds, then ensuring that they get sent to the correct place.
Functionally, the testing's going well, but I've currently got all of the test messages & desired message destinations defined in Ruby code - I'd like to move them to either a YAML file, or (second choice) a database. I'd prefer to use a YAML file over a database because it's easier to version control, and for non-technical testers to edit the message content directly.
Is there a "recommended" way to implement this sort of data management in Cucumber? It sort of smells like a fixtures approach makes sense, but fixtures to me have always involved populating a database from a file and then using the DB for testing, and I'm not 100% sure this is the best/simplest fit for this particular problem.
I believe what you will be most happy with is a Scenario Outline. You could perhaps create a YAML file and load it from a step, but that would not make very useful test output. What you (I think) would really like is to see each message and its destination, sorted by whether it passed or failed. The example below is for failed logins, but it gets the point across.
Scenario Outline: Failed Login
  Given I am not authenticated
  When I go to "/login"
  And I fill in "login" with "<mail>"
  And I fill in "password" with "<password>"
  And I press "Log In"
  Then the login request should fail
  Then I should see an error message

  Examples:
    | mail           | password       |
    | not_an_address | nil            |
    | not@not        | 123455         |
    | 123@abc.com    | wrong_password |
Each Example will turn green, red or yellow depending on whether it worked, failed or was pending.
