Cucumber & test data management for non-Ruby apps - ruby

I'm testing an app that's basically a message-handling application: a message comes in, its content is analysed, then it's sent somewhere else for processing. The app isn't built in Ruby.
As you might imagine, the main testing approach consists of generating a whole bunch of different types of (quite complex) messages, loading them into the app, waiting a few seconds, then ensuring that they get sent to the correct place.
Functionally, the testing's going well, but I've currently got all of the test messages & desired message destinations defined in Ruby code - I'd like to move them to either a YAML file, or (second choice) a database. I'd prefer to use a YAML file over a database because it's easier to version control, and for non-technical testers to edit the message content directly.
Is there a "recommended" way to implement this sort of data management in Cucumber? It sort of smells like a fixtures approach makes sense, but fixtures to me have always involved populating a database from a file and then using the DB for testing, and I'm not 100% sure this is the best/simplest fit for this particular problem.

I believe what you will be most happy with is a Scenario Outline. You could perhaps create a YAML file and load it from a step, but that would not make for very useful test output. What you (I think) would really like is to see each message and its destination, grouped by whether it passed or failed. The example below is for failed logins, but it gets the point across.
Scenario Outline: Failed Login
  Given I am not authenticated
  When I go to "/login"
  And I fill in "login" with "<mail>"
  And I fill in "password" with "<password>"
  And I press "Log In"
  Then the login request should fail
  Then I should see an error message

  Examples:
    | mail           | password       |
    | not_an_address | nil            |
    | not@not        | 123455         |
    | 123@abc.com    | wrong_password |
Each example row will turn green, red or yellow depending on whether it passed, failed or was pending.
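If you do still want the raw message content in YAML so that non-technical testers can edit it, a step definition can load the fixtures at run time. Below is a minimal sketch; the file layout, field names and helper name are assumptions for illustration, not part of the original answer.

```ruby
require 'yaml'

# fixtures/messages.yml might look like:
#   - name: order_update
#     content: "ORDER|123|SHIPPED"
#     destination: fulfilment_queue
#   - name: invoice
#     content: "INV|456|PAID"
#     destination: billing_queue

# Parse the YAML fixture file into an array of message hashes
# that step definitions can iterate over.
def load_message_fixtures(path)
  YAML.safe_load(File.read(path)).map do |row|
    { name: row['name'], content: row['content'], destination: row['destination'] }
  end
end
```

A step like `Given the test messages from "fixtures/messages.yml"` could then call this helper and feed each message into the app.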


How can I execute a specific feature file before the execution of remaining feature files in goDog?

I have a few data setup before implementing the remaining test cases. I have grouped all the data setup required to be executed before the execution of test cases in a single feature file.
How can I make sure that this data setup feature file is executed before executing any other feature file in goDog framework?
As I understand your question, you're looking for a way to run some setup instructions prior to running your features/scenarios. The problem is that scenarios and features are, by design, isolated. The way to ensure that something is executed prior to a scenario running is by defining a Background section. AFAIK you can't apply the same Background across features: scenarios are grouped per feature, and each feature can specify a Background that is executed before each of its scenarios. I'd just copy-paste your setup stuff everywhere you need it:
Background:
  Given I have the base data:
    | User | Status   | other fields |
    | Foo  | Active   | ...          |
    | Bar  | Disabled | ...          |
If there's a ton of steps involved in your setup, you can define a single step that you expand to run all the "background" steps like so:
Scenario: test something
Given my test setup runs
Then implement the "my test setup runs" step like so:
s.Step(`^my test setup runs$`, func() godog.Steps {
	return godog.Steps{
		"user test data is loaded",
		"other things are set up",
		"additional data is updated",
		"update existing records",
		"setup was successful",
	}
})
That ought to work.
Of course, to avoid having to start each scenario with that "Given my test setup runs" line, you can just start each feature file with:
Background:
  Given my test setup runs
That will ensure the setup is performed before each scenario. The upshot will be: 2 additional lines at the start of each feature file, and you're all set to go.

How to keep track of messages exchanged between a server and clients?

My app sends a notification to the PC when a new text message is received on the phone. I am doing that over Bluetooth, if it matters.
(This is relevant to PC side)
What I am struggling with is keeping track of messages for each contact. I am thinking of having a linked list that grows as new contacts come in. Each node will represent a new contact.
There will be another list that grows vertically and this will be the messages for that contact.
Here is a diagram to make it clear:
=========================
| contact 1 | contact 2 | ...
=========================
    ||          ||
=========   =========
| msg 0 |   | msg 0 |
=========   =========
    ||          ||
=========   =========
| msg 1 |   | msg 1 |
=========   =========
    .           .
    .           .
    .           .
This will handle the messages received but how do I keep track of the responses sent? Do I tag the messages as TAG_MSG_SENT, TAG_MSG_RECEIVED etc?
I have not written code for this part as I want to do the design first.
Why does it matter?
Well, when the user clicks on a contact from a list, I want to be able to display the session like this in a new window:
==============================
| contact 1 |
==============================
|Received 0 |
| Sent 0|
| Sent 1|
|Received 1 |
==============================
I am using C/C++ on windows.
A simple approach would be to use the existing file system to store the messages, as follows:
Maintain a received file and a sent file for each contact in a specific folder.
Name them contact-rec-file and contact-sent-file.
Every time you send or receive a message,
append it to the corresponding sent or received file:
first write the size of the message in bytes to the end of the file,
then write the content of the message.
Whenever you need to display messages, open the file,
read the size of a message, then read its contents using that size.
Note: Using main memory to store messages is pretty inefficient, as a lot of memory is used up once many messages have been sent.
Optimization: Use another file to store the number of messages and their seek positions in the sent or received files, so that you can read that index at load time and then seek directly to the correct position if you want to read only a particular message.
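The size-prefix-then-content scheme above can be sketched briefly. The question targets C/C++ (where the same idea maps onto fwrite/fread), but here it is in Ruby for compactness; the 4-byte little-endian length prefix and the helper names are assumptions, not from the answer.

```ruby
# Append one message to a per-contact log file:
# a 4-byte little-endian length, then the message bytes.
def append_message(path, text)
  File.open(path, 'ab') do |f|
    f.write([text.bytesize].pack('V'))  # 'V' = 32-bit unsigned, little-endian
    f.write(text)
  end
end

# Read every message back by repeatedly reading a length,
# then that many bytes of content, until end of file.
def read_messages(path)
  messages = []
  File.open(path, 'rb') do |f|
    until f.eof?
      len = f.read(4).unpack1('V')
      messages << f.read(len)
    end
  end
  messages
end
```

The optimization in the answer would add a second index file recording each message's seek offset, so a single message can be read without scanning from the start.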
It depends on what you want to keep track of. If you just want statistics on the sent and received messages, then two counters per contact will do. If you just want the messages sent and received by the client, not caring about how they are interleaved, then two lists per contact will do. If you also need to know how they are interleaved, then, as you suggested, a single list with a flag indicating whether each message was sent or received will work. There are definitely other possibilities; these are just to get you started.
Ok, if order matters, then here are two more ways that I can think of off the top of my head:
1) In the linked list, instead of having a flag indicating the status, have three next pointers: one for the next message, one for the next sent message, and one for the next received message. The next-message pointer will have the same value as one of the other two, but it tells you how the messages are interleaved. You can then easily walk the list of sent messages, received messages, both, or some other weird walk.
2) Have only one linked list/array/table, where each entry includes the contact info and the SENT/RECEIVED flag. This is not good if there's lots of other info about the contact that you wish to keep, since it would now be replicated; but for simplicity, it's one list instead of a list of lists. To remedy that problem, you could create a separate list with just the contact info and put a reference to it in each entry of the messages list. You could also add a contacts_next_message pointer to the messages list, so you can walk it to get all of one contact's messages.
And so on, there's lots of ways you can do this.
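The single-list-with-flag idea can be sketched in a few lines. Ruby is used here for brevity (the question is C/C++, where this maps to a struct plus a linked list); the class and method names are invented for illustration.

```ruby
# One chronological message list per contact; each entry is tagged
# with its direction, so the interleaved order is preserved.
class Conversation
  Message = Struct.new(:direction, :text)  # direction is :sent or :received

  def initialize
    @messages = []  # append-only, so chronological order is preserved
  end

  def record(direction, text)
    @messages << Message.new(direction, text)
  end

  # All messages, interleaved in the order they happened.
  def all
    @messages
  end

  # Just one side of the conversation, e.g. for a "sent only" view.
  def side(direction)
    @messages.select { |m| m.direction == direction }
  end
end
```

A window displaying a session would walk `all` in order; statistics or one-sided views come from `side(:sent)` / `side(:received)`.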

Joining concurrent/identical HTTP GET requests?

Do browsers join concurrent identical HTTP GET requests? At least, for static or cache-able content?
That is, if something like this happens:
| AJAX/HTTP-GET(resourceX)
| [start download]------------------------------------------->[finish download]
|
| AJAX/HTTP-GET(resourceX)
| [start download]--------->etc...
|
+------------------------------------------------------------------> Time
Will the browser figure out "Hey you're already trying to download resourceX! Don't try downloading it twice, it won't do anything!"?
Update:
Now of course, I can go to some site, try downloading a big file (e.g., "BigFile"), and click the link twice; this will download the file twice, as BigFile and BigFile(1). Granted, it's an error on the user's part, but still...
For cache-able resources (e.g., downloading some javascript file), it seems pretty inefficient if browsers can't figure out these duplicates...
The browser won't notice; it treats them as regular HTTP traffic. It might serve the second request from cache once the first one has finished (if the proper cache-control headers are set), but it won't join them while they are concurrent.
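Since the browser won't join them, an application can coalesce duplicate in-flight fetches itself. Below is a sketch of the idea: the first caller for a URL starts the download, and concurrent callers for the same URL wait on that same download and share its result. The class name is invented, and the fetcher block stands in for the real HTTP call.

```ruby
# Coalesce concurrent identical GETs: at most one download per URL is
# in flight at a time; later identical requests join the running one.
class RequestJoiner
  def initialize(&fetcher)
    @fetcher   = fetcher
    @mutex     = Mutex.new
    @in_flight = {}  # url => Thread currently downloading it
  end

  def get(url)
    thread = @mutex.synchronize do
      # Start a download only if none is in flight for this URL.
      @in_flight[url] ||= Thread.new do
        result = @fetcher.call(url)
        @mutex.synchronize { @in_flight.delete(url) }
        result
      end
    end
    thread.value  # every concurrent caller joins the same thread
  end
end
```

Once a download finishes it is removed from the in-flight table, so a later request for the same URL fetches again (or hits the HTTP cache, if headers allow).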

Ruby/Selenium WebDriver - Pausing test and waiting for user input, i.e. user inputs captcha

I'm using the Selenium WebDriver and Ruby to perform some automation, and I ran into a problem: a captcha around step 3 of a 5-step process.
I'm throwing all the automation into a rake script, so I'm wondering: is there a command to pause or break the running script temporarily until I enter data into the captcha, and then continue running on the next page?
To build on seleniumnewbie's answer and assuming that you have access to the console the script is running on:
print "Enter Captcha"
captchaTxt = gets.chomp
yourCaptchaInputWebdriverElement.send_keys captchaTxt
If you just want to pause and enter the captcha in your browser, you can just have it prompt at the console to do that very thing and it'll just sit there.
print "Enter the captcha in your browser"
gets
You could also set the implicit wait to a decently long period of time, so that Selenium would automatically see the next page and move on. However, this would leave the important captcha step (for documentation/process's sake) out of your test unless you're pretty anal with your commenting.
Since this is an actively participating test requiring user input I would say that making the tester press "enter" on the console is the way to go.
Since you are writing the test in a script, all you need to do is add a sleep to your test, e.g. sleep 100.
However, it is bad to add arbitrary sleeps to tests. You can instead do something like "wait for title 'foo'", where 'foo' is the title of the page in step 4. It need not be the title; it can be anything, but you get the idea. Wait for something semantic which indicates that step 3 is done and step 4 is ready to start.
This way, it's a more targeted wait.
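The "wait for something semantic" advice can be sketched as a small polling helper in plain Ruby (selenium-webdriver ships Selenium::WebDriver::Wait with essentially this shape). The timeout and interval values are arbitrary examples.

```ruby
require 'timeout'

# Poll a block until it returns truthy, or raise after the timeout.
def wait_until(timeout: 10, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise Timeout::Error, 'condition not met in time' if Time.now > deadline
    sleep interval
  end
end

# With a WebDriver instance, the step-4 check from above would be e.g.:
#   wait_until(timeout: 300) { driver.title == 'Step 4' }
```

A long timeout here gives the tester time to solve the captcha by hand, while the script resumes the instant step 4 actually appears.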
This has been implemented in Java, but with a similar technique; the solution can be found here.

Does validation in CQRS have to occur separately once in the UI, and once in the business domain?

I've recently read the article CQRS à la Greg Young and am still trying to get my head around CQRS.
I'm not sure about where input validation should happen, and if it possibly has to happen in two separate locations (thereby violating the Don't Repeat Yourself rule and possibly also Separation of Concerns).
Given the following application architecture:
#  +--------------------+             ||
#  |    event store     |             ||
#  +--------------------+             ||
#       ^        |                    ||
#       | events | events             ||
#       |        v                    ||
#  +--------------------+    events   +--------------------+
#  |      domain/       | ----------> |   (denormalized)   |
#  |  business objects  |      ||     |  query repository  |
#  +--------------------+      ||     +--------------------+
#    ^   ^   ^   ^   ^         ||               |
#    |   |   |   |   |         ||               |
#  +--------------------+      ||               |
#  |    command bus     |      ||               |
#  +--------------------+      ||               |
#       ^                                       |
#       |        +------------------+           |
#       +------- |  user interface  | <---------+
#    commands    +------------------+   UI form data
The domain is hidden from the UI behind a command bus. That is, the UI can only send commands to the domain, but never gets to the domain objects directly.
Validation must not happen when an aggregate root is reacting to an event, but earlier.
Commands are turned into events in the domain (by the aggregate roots). This is one place where validation could happen: If a command cannot be executed, it isn't turned into a corresponding event; instead, (for example) an exception is thrown that bubbles up through the command bus, back to the UI, where it gets caught.
Problem:
If a command won't be able to execute, I would like to disable the corresponding button or menu item in the UI. But how do I know whether a command can execute before sending it off on its way? The query side won't help me here, as it doesn't contain any business logic whatsoever; and all I can do on the command side is send commands.
Possible solutions:
For any command DoX, introduce a corresponding dummy command CanDoX that won't actually do anything, but lets the domain give feedback whether command X could execute without error.
Duplicate some validation logic (that really belongs in the domain) in the UI.
Obviously the second solution isn't favorable (due to lacking separation of concerns). But is the first one really better?
I think my question has just been solved by another article, Clarified CQRS by Udi Dahan. The section "Commands and Validation" starts as follows:
Commands and Validation
In thinking through what could make a command fail, one topic that comes up is validation. Validation is
different from business rules in that it states a context-independent fact about a command. Either a
command is valid, or it isn't. Business rules on the other hand are context dependent.
[…] Even though a command may be valid, there still may be reasons to reject it.
As such, validation can be performed on the client, checking that all fields required for that command
are there, number and date ranges are OK, that kind of thing. The server would still validate all
commands that arrive, not trusting clients to do the validation.
I take this to mean that, given I have a task-based UI (as is often suggested for CQRS to work well, with commands as domain verbs), I would only ever gray out (disable) buttons or menu items if a command cannot yet be sent off because some data required by the command is still missing or invalid; i.e. the UI reacts to the command's own validity, and not to the command's future effect on the domain objects.
Therefore, no CanDoX commands are required, and no domain validation logic needs to be leaked into the UI. What the UI will have, however, is some logic for command validation.
Client-side validation is basically limited to format validation, because the client side cannot know the state of the data model on the server. What is valid now, may be invalid 1/2 second from now.
So, the client side should check only whether all required fields are filled in and whether they are of the correct form (an email address field must contain a valid email address, for instance of the form (.+)@(.+)\.(.+) or the like).
All those validations, along with business rule validations, are then performed on the domain model in the Command service. Therefore, data that was validated on the client may still result in invalidated Commands on the server. In that case, some feedback should be able to make it back to the client application... but that's another story.
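The split described above can be sketched as a command object that knows its own context-independent validity, which is all the UI needs to enable or disable a button. The class and field names are invented for illustration; the server would still re-validate and apply business rules on receipt.

```ruby
# A command that validates its own shape (required fields present,
# formats OK) without consulting any domain state.
class RegisterUserCommand
  EMAIL_FORMAT = /\A.+@.+\..+\z/  # the "(.+)@(.+)\.(.+)" idea from above

  attr_accessor :email, :password

  def initialize(email: nil, password: nil)
    @email = email
    @password = password
  end

  # Context-independent validity: either the command is well-formed or
  # it isn't. Business rules (e.g. "email not already taken") stay on
  # the command side of the server and can still reject a valid command.
  def valid?
    !email.nil? && email.match?(EMAIL_FORMAT) &&
      !password.nil? && !password.empty?
  end
end
```

A task-based UI would call `valid?` as the user types and gray out the submit button until it returns true, with no domain logic leaked into the client.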
