Print Fixed Lines with Spring Shell

I want to develop a small Spring Shell project where I display static information that is potentially updated when an event is received. Is this functionality possible with Spring Shell, or is it even the correct tool for the job?
To give an example of what I want to achieve:
-------------------
| Stock Value: 5$ | < This information should always be displayed
-------------------
shell:> I can put input here
While I type, an event in the application should be able to change it, e.g. to:
-------------------
| Stock Value: 7$ | < This information gets updated
-------------------
shell:> I can still type here

Spring Shell can do that for you.
See the tutorial here: https://github.com/spring-projects/spring-shell/blob/master/spring-shell-docs/src/main/asciidoc/using-spring-shell.adoc
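For the event-driven part, a minimal sketch of what that could look like (assuming Spring Shell 2.x on Spring Boot; StockPriceEvent and all other names are illustrative, not part of the Spring Shell API):

import org.springframework.context.event.EventListener;
import org.springframework.shell.standard.ShellComponent;
import org.springframework.shell.standard.ShellMethod;

// Hypothetical application event carrying a new stock value.
class StockPriceEvent {
    final int value;
    StockPriceEvent(int value) { this.value = value; }
}

@ShellComponent
public class StockCommands {

    private volatile int stockValue = 5;

    // Any bean may publish StockPriceEvent via ApplicationEventPublisher;
    // this listener just caches the latest value for display.
    @EventListener
    public void onStockPrice(StockPriceEvent event) {
        this.stockValue = event.value;
    }

    // shell:> stock  ->  prints the current value on demand.
    @ShellMethod("Show the current stock value")
    public String stock() {
        return "Stock Value: " + stockValue + "$";
    }
}

Note that this only re-prints the value on demand; keeping the line pinned above the prompt while the user types would likely mean working directly with the underlying JLine terminal rather than with the annotations alone.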

Related

How can I execute a specific feature file before the execution of remaining feature files in goDog?

I have some data setup that must run before the remaining test cases. I have grouped all the setup required before the execution of the test cases into a single feature file.
How can I make sure that this data setup feature file is executed before any other feature file in the goDog framework?
As I understand your question, you're looking for a way to run some setup instructions prior to running your features/scenarios. The problem is that scenarios and features are, by design, isolated. The way to ensure that something is executed before a scenario runs is to define a Background section. AFAIK you can't apply the same Background across features: scenarios are grouped per feature, and each feature can specify a Background that is executed before each of its scenarios. I'd just copy-paste your setup stuff everywhere you need it:
Background:
  Given I have the base data:
    | User | Status   | other fields |
    | Foo  | Active   | ...          |
    | Bar  | Disabled | ...          |
If there are a ton of steps involved in your setup, you can define a single step that expands to run all the "background" steps, like so:
Scenario: test something
  Given my test setup runs
Then implement the my test setup runs step like so:
s.Step(`^my test setup runs$`, func() godog.Steps {
    // Returning godog.Steps from a step function makes godog run
    // each of the listed steps in order, as sub-steps of this one.
    return godog.Steps{
        "user test data is loaded",
        "other things are set up",
        "additional data is updated",
        "update existing records",
        "setup was successful",
    }
})
That ought to work.
Of course, to avoid having to start each scenario with that Given my test setup runs step, you can just start each feature file with:
Background:
  Given my test setup runs
That will ensure the setup is performed before each scenario. The upshot will be: 2 additional lines at the start of each feature file, and you're all set to go.

Setting a specific time interval in Splunk dashboard to get results from

I am currently trying to set up a Splunk search query in a dashboard that checks a specific time interval. The job I am trying to set it up with runs three times a day: once at 6:00am, once at 12:20pm, and once at 16:20 (4:20pm). Currently the query just searches for the latest time and sets the background according to whether it received an error or not, but the users want the three daily runs to be displayed separately. So now I need to set up an interval of time for each of the three panels to display, and I have tried a lot of things with no luck (I am new to Splunk, so I have been just trying different syntax at random).
I have tried using a search command like |search Time>6:00:00 Time<7:00:00, and also tried other commands placed before the stats command that gets the latest time, with no luck. I'm just stuck at this point and have no clue what to try.
I have my index at the top here but don't think it's necessary to show.
| rex field=_raw ".+EVENT:\s(?<event>\S+)\s.+STATUS:\s(?<status>\S+)\s.+JOB:\s(?<job>\S+)"
| stats latest(_time) as Time by status
| eval Time=strftime(Time, "%H:%M:%S")
| search Time>6:00:00 Time<7:00:00
| sort 1 - Time
| table Time status
| append [| makeresults | eval Time="06:10:00"]
| eval range = case(status="FAILURE", "severe", status="SUCCESS", "low", 1==1, "guarded")
| head 1
I was having the same issue as you (whereas the solutions on here and elsewhere were not working); however, the below ended up working for me.
Adding this, to properly extract the hour:
| eval date_hour=strftime(_time, "%H")
And my full working search (each day, between 6am and 11pm, over the prior 25 days):
index=mymts earliest=-25d | eval date_hour=strftime(_time, "%H") | search date_hour>=6 date_hour<=23 host="172.17.172.1" "/netmap/*"

Want to execute Background only once in Cucumber feature files for multiple scenarios

I want to execute the Background only once in each Cucumber feature file for multiple scenarios. How can I do that in the step files?
Feature: User can verify...........

  Background:
    Given Enter test data for a specific logic

  Scenario: Verify ......... 1
    When A1
    And B1
    Then C1

  Scenario: Verify ......... 2
    When A2
    And B2
    Then C2

  Scenario: Verify ......... 3
    When A3
    And B3
    Then C3
Tests should be isolated. That is the way Cucumber is designed, and there's a very good reason for it. I would strongly urge against hacking around it unless you absolutely have to.
If you have something that needs to be executed before your entire test suite, consider a @BeforeAll hook.
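For example, a sketch assuming Cucumber-JVM 7+ (where @BeforeAll hooks exist; TestDataLoader is a hypothetical helper):

import io.cucumber.java.BeforeAll;

public class SuiteHooks {

    // Runs exactly once, before the first scenario of the whole run.
    @BeforeAll
    public static void globalSetup() {
        TestDataLoader.loadBaseData(); // hypothetical helper that seeds your data
    }
}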
That said, I've come across this before. We had tests which would start a process that took a long time (such as provisioning a VM, which would take 10 minutes at a time), but which could be skipped in other tests if it had already been done.
So you may want to design steps along the lines of..
"Given that .. X has been done"
and in the step, detect whether X has been done or not; if it hasn't, do X, and if it has, skip it. For example, say creating users is a process that takes absolutely ages. Then we could do:
Given that user "Joe Bloggs" has been created
The step definition would first try to determine whether Joe exists, and if he doesn't, create the user. You have an initial start-up cost, but then other tests can safely assume Joe exists.
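A sketch of such an idempotent step definition (Cucumber-JVM; UserApi is a hypothetical client for your system):

import io.cucumber.java.en.Given;

public class UserSteps {

    private final UserApi userApi = new UserApi(); // hypothetical client for your system

    @Given("that user {string} has been created")
    public void ensureUserExists(String fullName) {
        // Create the user only if it does not already exist; otherwise skip it.
        if (userApi.findByName(fullName) == null) {
            userApi.create(fullName);
        }
    }
}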
WHY YOU SHOULD NOT DO THIS
If you do this, there's a high probability your tests will conflict with one another.
Say you have lots of tests that use the Joe Bloggs user. Maybe some will delete him from the system. Some might temporarily deactivate the user, add roles, or change his name: all sorts of things. All tests assume certain things about the system they're testing, and you are intentionally harming your tests' assumptions about the environment.
It's particularly bad if you are running parallel tests. Maybe your system has a restriction that only one person can log in at a time as Joe. Or every test is changing loads of things about Joe, and no test can assume anything about the state of the Joe user. Then you'll be in a huge mess.
The best solution is often to create brand new data for each test you run. Open up those APIs and create disposable data for each test run. Here's a good blog post about it: https://opencredo.com/test-automation-concepts-data-aliases/
Background is designed to run before each scenario. It's neither good practice nor standard to hack the Background.
If you want your Background to run only once, you can add a condition on a static variable: e.g. if i == 0, execute the logic and increment i at the end of the method.
For the next scenario, i is 1, which is not equal to 0, so the logic won't execute.
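In Cucumber-JVM terms, that guard could just as well live in a hook instead of the step itself; a sketch (the setup body is a placeholder):

import io.cucumber.java.Before;

public class BackgroundHooks {

    private static boolean initialized = false;

    // Runs before every scenario, but performs the setup only once per run.
    @Before
    public void runBackgroundOnce() {
        if (!initialized) {
            // ... enter the test data for the specific logic here ...
            initialized = true;
        }
    }
}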
We can have both a Scenario and a Scenario Outline in a single feature file. With this, the Scenario will run only once and the Scenario Outline will run based on the data given in the Examples table.
We had a similar problem and couldn't find any solution for sharing a Background across multiple scenarios. Background is designed to run before every scenario; if a scenario has Examples, it will run before each example row.
We had two options to fix this problem:
1) Use the @BeforeClass annotation of JUnit
2) Create a setup scenario that always executes first.
For example, in API testing, you log in once and use that session every time to access the other APIs:
Feature: Setup Data
  Scenario: Login as System Admin
    Given Customer logs in as System Admin
  Scenario: Verify ......... 1
    When A1
    And B1
    Then C1
  Scenario: Verify ......... 2
    When A2
    And B2
    Then C2
  Scenario: Verify ......... 3
    When A3
    And B3
    Then C3
After the setup scenario runs first, the remaining scenarios execute, and you don't need to use a Background.
I would say a Background should only be used when the business requirement demands it; otherwise it creates unwanted test data and load on the test environment, and may slow down test execution.
Please let me know if you find another solution.
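The log-in-once idea from option 2 could be backed by a small shared cache, something like this sketch (AuthClient and the credentials are placeholders):

// Hypothetical once-per-run admin session shared by all scenarios.
public final class AdminSession {

    private static String token;

    private AdminSession() {}

    // Logs in on first use, then reuses the same session everywhere.
    public static synchronized String token() {
        if (token == null) {
            token = AuthClient.login("system-admin", "secret"); // hypothetical client
        }
        return token;
    }
}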
Running some steps (a Background) before each set of scenarios or before a Scenario Outline can also be achieved by creating a tagged @Before method that takes a Scenario object as a parameter. Within the before method, execute your logic only if the scenario name differs from the last scenario's name.
Below is how you can do it:
Feature: Setup Data
  Given Customer logs in as System Admin

  @BeforeMethodName
  Scenario Outline: Verify ......... 1
    When <Variable1> And <Variable2>
    Then <Variable3>
    Examples:
      | Variable1 | Variable2 | Variable3 |
      | A1        | B1        | C1        |
      | A2        | B2        | C2        |
      | A3        | B3        | C3        |
      | A4        | B4        | C4        |

  @BeforeMethodName
  Scenario Outline: Verify ......... 2
    When <Variable1> And <Variable2>
    Then <Variable3>
    Examples:
      | Variable1 | Variable2 | Variable3 |
      | X1        | Y1        | Z1        |
      | X2        | Y2        | Z2        |
      | X3        | Y3        | Z3        |
      | X4        | Y4        | Z4        |
And define the @BeforeMethodName hook as below:
private static String scenarioName = null;

@Before("@BeforeMethodName")
public void beforeMethodName(Scenario scene) {
    // Run the logic only when the scenario name changes, i.e. once per outline.
    if (!scene.getName().equals(scenarioName)) {
        // Implement your logic
        scenarioName = scene.getName();
    }
}
This way the @Before hook will be called before each example row, but will execute the logic only once per Scenario Outline.
Old issue, but adding this in case others find it.
As has been mentioned, Cucumber should only be used to structure your code.
You can use tagged hooks to create items which are used on a sub-set of tests. Furthermore, you could isolate the code into helpers, and then conditionally call these helpers inside your Ruby steps.
We'd probably need a bit more clarity to make a judgement.

Does validation in CQRS have to occur separately once in the UI, and once in the business domain?

I've recently read the article CQRS à la Greg Young and am still trying to get my head around CQRS.
I'm not sure about where input validation should happen, and if it possibly has to happen in two separate locations (thereby violating the Don't Repeat Yourself rule and possibly also Separation of Concerns).
Given the following application architecture:
# +--------------------+                  ||
# |    event store     |                  ||
# +--------------------+                  ||
#      ^       |                          ||
#      |       | events                   ||
#      |       v                          ||
# +--------------------+      events      +--------------------+
# |      domain/       | ---------------> |   (denormalized)   |
# |  business objects  |       ||         |  query repository  |
# +--------------------+       ||         +--------------------+
#   ^  ^  ^  ^  ^              ||                   |
#   |  |  |  |  |              ||                   |
# +--------------------+       ||                   |
# |    command bus     |       ||                   |
# +--------------------+       ||                   |
#           ^                                       |
#           |          +------------------+         |
#           +--------- |  user interface  | <-------+
#             commands +------------------+  UI form data
The domain is hidden from the UI behind a command bus. That is, the UI can only send commands to the domain, but never gets to the domain objects directly.
Validation must not happen when an aggregate root is reacting to an event, but earlier.
Commands are turned into events in the domain (by the aggregate roots). This is one place where validation could happen: If a command cannot be executed, it isn't turned into a corresponding event; instead, (for example) an exception is thrown that bubbles up through the command bus, back to the UI, where it gets caught.
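A minimal Java sketch of that command-to-event step (names are illustrative, not from the article):

// Illustrative aggregate root: a command either yields an event or throws.
record SellStockCommand(int amount) {}
record StockSoldEvent(int amount) {}

class StockItem {

    private int quantity = 10;

    StockSoldEvent handle(SellStockCommand cmd) {
        if (cmd.amount() > quantity) {
            // The command cannot execute: no event is produced; the exception
            // bubbles up through the command bus, back to the UI.
            throw new IllegalStateException("not enough stock");
        }
        quantity -= cmd.amount();
        return new StockSoldEvent(cmd.amount());
    }
}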
Problem:
If a command won't be able to execute, I would like to disable the corresponding button or menu item in the UI. But how do I know whether a command can execute before sending it off on its way? The query side won't help me here, as it doesn't contain any business logic whatsoever; and all I can do on the command side is send commands.
Possible solutions:
For any command DoX, introduce a corresponding dummy command CanDoX that won't actually do anything, but lets the domain give feedback on whether command X could execute without error.
Duplicate some validation logic (that really belongs in the domain) in the UI.
Obviously the second solution isn't favorable (due to lacking separation of concerns). But is the first one really better?
I think my question has just been solved by another article, Clarified CQRS by Udi Dahan. The section "Commands and Validation" starts as follows:
Commands and Validation
In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn't. Business rules, on the other hand, are context dependent.
[…] Even though a command may be valid, there still may be reasons to reject it.
As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.
I take this to mean that, given that I have a task-based UI (as is often suggested for CQRS to work well, with commands as domain verbs), I would only ever gray out (disable) buttons or menu items if a command cannot yet be sent off because some data required by the command is still missing or invalid; i.e. the UI reacts to the command's own validness, and not to the command's future effect on the domain objects.
Therefore, no CanDoX commands are required, and no domain validation logic needs to be leaked into the UI. What the UI will have, however, is some logic for command validation.
Client-side validation is basically limited to format validation, because the client side cannot know the state of the data model on the server. What is valid now may be invalid 1/2 second from now.
So the client side should check only whether all required fields are filled in and whether they are of the correct form (an email address field must contain a valid email address, for instance of the form (.+)@(.+)\.(.+) or the like).
All those validations, along with the business rule validations, are then performed on the domain model in the command service. Therefore, data that was validated on the client may still result in Commands being rejected on the server. In that case, some feedback should be able to make it back to the client application... but that's another story.
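To make that split concrete, here is a minimal Java sketch of the kind of context-independent command validation both sides can run (all names are illustrative):

import java.util.ArrayList;
import java.util.List;

// Illustrative task-based command; its validity is a context-independent fact.
public final class RegisterCustomerCommand {

    private final String name;
    private final String email;

    public RegisterCustomerCommand(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Checked by the UI to enable/disable the submit button, and checked
    // again by the command service, which never trusts the client.
    public List<String> validate() {
        List<String> errors = new ArrayList<>();
        if (name == null || name.trim().isEmpty()) {
            errors.add("name is required");
        }
        if (email == null || !email.matches("(.+)@(.+)\\.(.+)")) {
            errors.add("email address is malformed");
        }
        return errors;
    }
}

Business rules (say, "this email is already registered") stay in the domain and can still reject a command that passed this check.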

Cucumber & test data management for non-Ruby apps

I'm testing an app that's basically a message-handling application: a message comes in, its content is analysed, then it's sent somewhere else for processing. The app isn't built in Ruby.
As you might imagine, the main testing approach consists of generating a whole bunch of different types of (quite complex) messages, loading them into the app, waiting a few seconds, and then ensuring that they get sent to the correct place.
Functionally, the testing's going well, but I currently have all of the test messages and desired message destinations defined in Ruby code. I'd like to move them to either a YAML file or (second choice) a database. I'd prefer a YAML file over a database because it's easier to version control, and non-technical testers can edit the message content directly.
Is there a "recommended" way to implement this sort of data management in Cucumber? It sort of smells like a fixtures approach makes sense, but fixtures to me have always involved populating a database from a file and then using the DB for testing, and I'm not 100% sure this is the best/simplest fit for this particular problem.
I believe what you will be most happy with is a Scenario Outline. You could perhaps create a YAML file and load it from a step, but that would not make very useful test output. What you (I think) would really like is to see each message and its destination, listed by whether it passed or failed. The example below is for failed logins, but it gets the point across.
Scenario Outline: Failed Login
  Given I am not authenticated
  When I go to "/login"
  And I fill in "login" with "<mail>"
  And I fill in "password" with "<password>"
  And I press "Log In"
  Then the login request should fail
  Then I should see an error message
  Examples:
    | mail           | password       |
    | not_an_address | nil            |
    | not@not        | 123455         |
    | 123@abc.com    | wrong_paasword |
Each Example will turn green, red or yellow depending on whether it worked, failed or was pending.
