Trying to perfect my Cucumber scenarios - Ruby

I know either of these will work, but I am trying to become a better member of the Ruby/Cucumber community. I have a story that tests that if a section of my website has no links under it, that section should not display. So which of these two ways is the best way to write the scenarios? Once again, I understand either will work, but I'm looking for the best-practice solution. I would normally use option B, as the scenarios all test different "Then" steps; however, I have been doing some research and I'm second-guessing myself, since I can test all the scenarios with the same "Given" statement, and I have read that you should only create a new scenario if you are changing both the "Given" and the "Then" steps.
A.
Scenario: A user that cannot access A, B, C, or D
  Given I am a user without access to A, B, C, or D
  When I navigate to reports
  Then I see the A header
  But I cannot click on A's header
  And I see an error message under A stating the user does not have access
  And I do not see the B section
  And I do not see the C section
  And I do not see the D section
OR
B.
Scenario: A user that cannot access A
  Given I am a user without access to A
  When I navigate to reports
  Then I see the A header
  And I see an error message under A stating the user does not have access
  But I cannot click on A's header

Scenario: A user that cannot access B
  Given I am a user without access to B
  When I navigate to reports
  Then I do not see the B section

Scenario: A user that cannot access C
  Given I am a user without access to C
  When I navigate to reports
  Then I do not see the C section

Scenario: A user that cannot access D
  Given I am a user without access to D
  When I navigate to reports
  Then I do not see the D section

I believe best practice is to break features down into their various parts (in this case, scenarios).
Option B is better because it adheres to the single responsibility principle (which, of course, can be applied to many different parts of code). The way B is written is clear and direct. If you come back to this in 6 months, or a new developer sees it for the first time, you will both have a good idea of the goal of the test.
Option A seems to be doing a lot, and although this is an integration test, you should keep the specific parts of code being tested as independent as possible. Ask yourself this: when this test fails, will you know exactly why, or will you have to start digging around to see which part of the test actually failed?
Best practice, in this context, advocates smaller sections of code. If these tests start repeating themselves (DRY: don't repeat yourself), you can start to refactor them (with a Background, perhaps), as sketched below.
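For example, once scenarios share both their "Given" and "When" steps, a Background can hoist the repetition. A minimal sketch, assuming a single restricted user can stand in for all four sections (the step wording is illustrative):

Background:
  Given I am a user without access to A, B, C, or D
  When I navigate to reports

Scenario: The B section is hidden
  Then I do not see the B section

Scenario: The C section is hidden
  Then I do not see the C section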

Granular scenarios are preferable because they communicate the desired behavior more explicitly and provide better diagnostics when there is a regression. As your application evolves, small scenarios are also easier to maintain. Long scenarios, by contrast, develop a "gravitational attraction" and keep getting longer, and in a long scenario it is difficult to figure out all of the setup and side effects of the steps.
A scenario outline can make your tests both granular and concise. In the following example, it's obvious at a glance that resources B, C, and D all share the same policy, while resource A is different:
Scenario Outline: A user cannot access an unauthorized resource
  Given I am a user without access to <resource>
  When I navigate to reports
  Then I do not see the <resource> section

  Examples:
    | resource |
    | B        |
    | C        |
    | D        |

Scenario: A user that cannot access A
  Given I am a user without access to A
  When I navigate to reports
  Then I see the A header
  And I see an error message under A stating the user does not have access
  But I cannot click on A's header

I would replace A, B, C, and D with something more readable. Imagine that your grandmother needs to understand the definition; she wouldn't understand what A, B, C, and D mean. So let's put it this way:
Given a basic user
..
..
Then the user cannot see the edit tools
Given a super user
..
..
Then the super user should see the edit tools
Just try to merge A, B, C, and D into something meaningful, such as a group name, a level, a team, etc.
You can then use unit tests for each of the items A, B, C, and D if you wish.

Why is there no include link between create and delete use cases?

I have seen many examples of use case diagrams (in UML) on the internet, like this one:
What I see is that the delete use case does not include the create use case, even though I can't imagine deleting a user without having created it.
I wonder why it is still right not to use the include, and when I should use it and when not.
If there is Delete-User - - <<include>> - -> Create-User, that means that during the execution of the UC Delete-User, the UC Create-User is also executed, and of course that makes no sense.
The expected behavior can be:
Delete-User has the prerequisite that Create-User was successfully executed for the same user, and that Delete-User was not already executed successfully for the same user (after the last Create-User, that is);
or Delete-User can be executed without prerequisite, but if the user does not exist (Create-User was not executed successfully for the same user, or Delete-User was already executed for the same user after the last creation of the user), this is an error case.
Bruno's excellent answer already explains why it's not a good idea to include Create into Delete, and what alternatives may be used to express the relation that you explained between the two use-cases.
But in case it helps, here is another angle:
A use-case diagram does not represent a logical sequence of activities.
A use-case only represents a goal for an actor that motivates his/her interaction with the system, independently of the other use-cases and the system's history. So the simple fact that a sysAdmin may want, at some moment in time, to delete a User is sufficient for the use-case Delete to exist on its own.
include shows that a goal may include some other goals of interest for the user. Inclusion is not for functional decomposition, where you'd break down what needs to be done in all its details. Nor is it there to show sequential dependency. So for Delete, you shall not include what happens before, because happens-before is sequentiality. Inclusion only highlights relevant sub-goals that are meaningful for the user and that the user always wants to achieve when aiming at the larger goal.
Finally, a use-case Delete may make perfect sense even if the use-case Create was never performed by any actor, for example because:
the new system took over a legacy database with all its past accounts, and the first thing the SysAdmin will do in the new system is clean up the already existing old unused accounts before creating new ones;
the SysAdmin wants to Delete an account but finds out only during the interaction that the account didn't exist, was misspelled, or was already deleted. These possibilities would all be alternate flows that you would describe in the narrative of the same use-case;
or no Create use-case is foreseen at all, because user creation is done automatically in the background (e.g. based on an SSO), without the actors being involved.

Different dashboards based on same analyse run

Sonar-Qube: V.5.1.1
C#-Plugin: V.4.0
ReSharper-Plugin: V.2.0
Due to the long analysis runs, I would like to have the following:
Let's assume I analyze my source with the rules A, B, C, and D. Now I would like to have one dashboard based on the issues found by rules A and B, another dashboard based on the issues found by rules C and D, and a third one based on all rules. But I don't want to have an analysis run for each of those combinations! Currently an analysis run takes 4 hours!
What you're after isn't possible.
===Edit===
Based on our comment conversation, I'd advise putting all rules in the same profile and setting the severity of "next year's" rules to Info. The teams can easily use the issues page to choose which sets of issues to see at one time.
When it's time to make rule set II official, you can simply upgrade the severities of the relevant rules.

How does Google Docs deal with editing collisions?

I've been toying around with writing my own Javascript editor, with functionality similar to Google Docs (allowing multiple people to work on it at the same time). One thing I don't understand:
Let's say you've got User A and User B connected directly to each other with a network delay of 10ms. I'm assuming the editor uses a diff system (as I understand Docs does) where edits are represented like "insert 'text' at index 3," and that diffs are timestamped and forced to apply chronologically by all clients.
Let's start off with a document containing the text: "xyz123"
User A types "abc" at the beginning of the document at timestamp 001ms, while User B types "hello" between "xyz" and "123" at timestamp 005ms.
Both users would expect the result to be "abcxyzhello123"; however, taking into account network delay:
User B would receive User A's edits of "insert 'abc' at index 0" at time 011ms. In order to keep the chronological order, User B would undo User B's insertion at index 3, insert User A's "abc" at index 0, then re-insert User B's insertion at index 3, which is now between "abc" and "xyz," thus giving "abchelloxyz123"
User A would receive User B's edits of "insert 'hello' at index 3" at time 015ms. It would recognize that User B's insertion was done after User A's, and simply insert "hello" at index 3 (now between "abc" and "xyz"), giving "abchelloxyz123"
Of course, "abchelloxyz123" is not the same as "abcxyzhello123"
Other than literally assigning each and every character its own unique ID, I can't imagine how Google would manage to solve this problem effectively.
Some possibilities I've thought of:
Tracking insertion points instead of sending indexes with diffs would work, except you would have the exact same problem if User B moved his insertion point 1ms before editing.
You could have User B send some information with his diff, like "inserting after 'xyz'" so that User A could intelligently recognize this has happened, but what if User A inserts the text "xyz?"
User B could recognize that this has happened (when it receives User A's diff and sees that it's a conflict), then send out a diff undoing User B's edits and a new diff that inserts User B's "hello" "abc".length indices further to the right. The problem with this is that (1) User A would see a "jump" in the text, and (2) if User A keeps editing, User B would have to continuously fix its diffs; even the "fixer" diffs would be off and need fixing, exponentially increasing the complexity.
User B could send along with its diff a property saying that the last timestamped diff it received was at -005ms or so; A could then recognize that B didn't know about its changes (since A's diff was at 001ms) and do conflict resolution at that point. The issues are that (1) all users' timestamps will be slightly off, considering most computer clocks aren't accurate to the millisecond, and (2) if there's a third User C with a 25ms lag to User A but a 2ms lag to User B, and User C adds some text between "x" and "y" at -003ms, then User B would use User C's edit as a reference point, but User A wouldn't know about User C's edit (and thus User B's reference point) until 22ms later. I believe this could be solved by using a common server to timestamp all edits, but that seems rather involved.
You could give each character a unique ID, then work off of those IDs instead of indexes, but that seems like overkill...
I'm reading through http://www.waveprotocol.org/whitepapers/operational-transform, but would love to hear any and all approaches to fixing this problem.
There are different possibilities for implementing concurrent editing of replicas, depending on the scenario's topology, each with different trade-offs.
Using a central server
The most common scenario is a central server that all clients have to communicate with.
The server could keep track of how the document of each participant looks. Both A and B then send a diff with their changes to the server. The server would then apply the changes to the respective tracking documents. Then it would perform a three-way-merge and apply the changes to the master document. It would then send the diff between the master document and the tracking documents to the respective clients. This is called differential synchronization.
A different approach is called operation(al) transformation, which is similar to rebasing in traditional version control systems. It doesn't require a central server, but having one makes things much easier if you have more than 2 participants (see the OT FAQ). The gist is that you transform the changes in one edit so that the edit assumes that the changes of another edit already happened. E.g. A would transform B's edit insert(3, hello) against its edit insert(0, abc) with the result insert(6, hello).
The difference between rebasing and OT is that rebasing doesn't guarantee consistency if you apply edits in different orders (e.g. if B were to rebase A's edit against theirs the other way around, this can lead to diverging document states). The promise of OT on the other hand is to allow any order if you do the right transformations.
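To make the transformation concrete, here is a minimal sketch in TypeScript of transforming one insert against another. This illustrates the idea, not Google's actual implementation, and the tie-breaking rule for equal indices is an assumption:

// Transform `op` so it can be applied after `applied` has already happened.
// If `applied` inserted at an earlier index, `op` shifts right by the
// inserted length; otherwise it is unchanged.
interface Insert {
  index: number;
  text: string;
}

function transformInsert(op: Insert, applied: Insert, opWinsTies: boolean): Insert {
  if (applied.index < op.index || (applied.index === op.index && !opWinsTies)) {
    return { index: op.index + applied.text.length, text: op.text };
  }
  return op;
}

// The example from the text: B's insert(3, "hello") transformed against
// A's insert(0, "abc") becomes insert(6, "hello").
console.log(transformInsert({ index: 3, text: "hello" }, { index: 0, text: "abc" }, false));

Applied symmetrically on both replicas (A transforms B's edit before applying it, while B applies A's earlier-index edit unchanged), both sides converge on "abcxyzhello123".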
No central server
OT algorithms exist that can deal with peer-to-peer scenarios (with the trade-off of increased implementation complexity on the control layer and increased memory usage). Instead of a simple timestamp, a Version vector can be used to keep track of the state an edit is based on. Then (depending on the capability of your OT algorithm, specifically transform property 2), incoming edits can be transformed to fit in the order they are received, or the version vector can be used to impose a partial order on the edits -- in this case history needs to be "rewritten", by undoing and transforming edits, so that they adhere to the order imposed by the version vectors.
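As a sketch of the bookkeeping involved: a version vector is just a per-site count of operations seen, and two edits are concurrent exactly when neither vector "happened before" the other. A minimal illustration, with made-up names:

// A version vector maps each site to the number of operations seen from it.
type VersionVector = Record<string, number>;

// `a` happened before `b` if every count in `a` is <= the one in `b`
// and at least one is strictly smaller.
function happenedBefore(a: VersionVector, b: VersionVector): boolean {
  const sites = new Set([...Object.keys(a), ...Object.keys(b)]);
  let strictlyLess = false;
  for (const site of sites) {
    const ca = a[site] ?? 0;
    const cb = b[site] ?? 0;
    if (ca > cb) return false;
    if (ca < cb) strictlyLess = true;
  }
  return strictlyLess;
}

// Edits ordered by neither vector are concurrent and must be reconciled
// by transformation (or a deterministic tie-break).
function concurrent(a: VersionVector, b: VersionVector): boolean {
  return !happenedBefore(a, b) && !happenedBefore(b, a);
}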
Lastly, there is a group of algorithms based on CRDTs, such as WOOT, Treedoc, or Logoot, which try to solve the problem with specially designed data types that allow operations to commute, so the order in which they are applied doesn't matter (this is similar to your idea of an ID for each character). The trade-offs here are memory consumption and overhead in operation construction.

Data sharing in GUI, Matlab

I'm developing a GUI with pop-up windows, so it is actually a work package with multiple GUIs.
I have read through the examples given in the help files (changeme and toolpalette), but I have failed to adapt their method of transferring data from the new GUI back to the old one.
Here is my problem.
I have two GUIs: A, the main one, and B, which I use to collect input data; I want to transfer the data back to A.
Question 1:
I want to define new sub-fields of the handles structure in A,
let's say,
handles.newclass
How can I define its properties, e.g. 'String'?
Question 2:
In A, a button has the callback
B('A', handles.A);
so we activate B.fig.
After the work in B is finished,
it has collected the following data (string and double) in B(!):
title_1 title_2 ... title_n
and
num_1 num_2 ... num_n
I want to pass the data back to A.
Following the instructions, I wrote the code shown below.
mainHandles = guidata(A);
title = mainHandles.title_1;
set(title,'String',title_1);
However, when I go back to A, the handles in A were not changed at all.
Please, someone help me out here.
Thank you!
=============update================
The solution I found is adding extra variables (say, handles.GUIdata) to the handles structure of one GUI, and whenever the data are required, just reading them from the corresponding GUI.
It works well for me, since I have a main control panel and several sub-GUIs.
There is a short discussion of this issue here.
I have had similar issues where I wanted external batch scripts to actually control my GUI applications, but there is no reason two GUIs would not be able to do the same.
I created a Singleton object, and when the GUI application starts up it gets the reference to the Singleton controller and sets the appropriate GUI handles into the object for later use. Once the Singleton has the handles, it can use set and get functions to provide or exchange data with any GUI control whose handle it holds. Any function/callback in the system can get the handle to the Singleton and then invoke routines on it that allow data to be exchanged or even control operations to be run. Your GUI A can, for instance, ask the controller for the value in GUI B's field X, or even modify that value directly if desired. It's very flexible.
In your case, be sure to invalidate any handles if GUI A or B goes away, and test whether a GUI component actually exists before getting or modifying any values. The Singleton object will even survive across multiple invocations of your app, as long as Matlab itself is left running, so be sure to clean up on exit if you don't want stale information lying around.
http://www.mathworks.com/matlabcentral/fileexchange/24911-design-pattern-singleton-creational
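For reference, a minimal sketch of such a controller in classdef MATLAB (the class and property names are made up for illustration):

classdef GuiController < handle
    % Singleton controller that holds GUI handles for later use.
    properties
        mainHandles    % handles structure of GUI A
        dialogHandles  % handles structure of GUI B
    end
    methods (Static)
        function obj = instance()
            % Return the one shared controller, creating it on first use.
            persistent singleton
            if isempty(singleton) || ~isvalid(singleton)
                singleton = GuiController();
            end
            obj = singleton;
        end
    end
end

Any callback can then call ctrl = GuiController.instance(); and read or update the stored handles.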
Regarding Question 2, it looks like you forgot to first specify that Figure A should be active when setting the title. Fix that and everything else looks good (at least, the small snippets you've posted).
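In other words, something along these lines in B's callback; a sketch assuming A is the main figure's handle, with the field names taken from the question:

mainHandles = guidata(A);                     % fetch A's handles structure
figure(A);                                    % make figure A the active figure
set(mainHandles.title_1, 'String', title_1);  % push the collected string
guidata(A, mainHandles);                      % store the handles back into A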

Designing a complex workflow diagram

We've got a surprisingly complex workflow that needs to be monitored by quasi-technical employees via an in-house webapp. There are about 30 steps, some of which are manual (editing), some are semi-automated stop points (like "the files have been received" or customer approval of certain templates), and some are completely automated (file conversion, search indexing, etc.). The flowchart for all of these steps is large and complicated, and three people might be working on three completely different steps at any one time.
How would you present this vast amount of information as usefully as possible to your users? Just showing the whole diagram seems like the brute force solution. But it's big, and it'll likely get bigger as we do more things. Not to mention the complexity necessary to encode this entire diagram in HTML.
I assume you don't want to show these just for entertainment or mockery, but to help the users along the way, automate as much as possible, document the process, etc. It would probably help if you clearly defined the goals or purpose of your app.
I don't see a point in showing the entire workflow, except for "debugging the business rules" or if the clients want to see it.
If your goal is to help users do their job, I would present the state the "project" (or whatever term fits better) is in, and the possible transitions to other states.
The state might consist of multiple, mostly independent variables: for example, one might describe the progress of the content (e.g. "incomplete" / "complete" / "reviewed by 2nd staffer" / "signed off by 2nd staffer"), while others might contain a schedule that is developed in parallel (e.g. "test print date = not scheduled", "print date = not scheduled", "final delivery = tomorrow, preferably yesterday").
A transition might be "Sent to customer for review", "mark as content-complete", "content modified", etc.
Is this what you have in mind?
I propose to divide your workflow into modules and represent the active state for each module.
A module is a subset of your main workflow. For example, it could be divided by tasks, persons, roles, departments, etc. This will greatly simplify the representation of the workflow. Let's say someone is responsible for data entry at many critical moments; we can group all of his tasks in one module (or sub-workflow) containing the same activities, inputs, outputs, and conditions. Modules can be interdependent and related.
A state is where we are located in a module. In simple workflows there is only one active task; in real life we are multi-threaded! So in one module, many states could be active at the same time. The state also includes active inputs, outputs, and memory bits.
An input is something required to perform an activity or to evaluate a boolean condition. It could be a document, a piece of data, a signal...
An output is something resulting from a task: a piece of information, a document, a signal...
Enough definitions?
Then simply convert your workflow into ladder logic and you have your states!
See the definition of ladder logic on Wikipedia.
You display only active states:
Active task(s) for the module
Inputs required / inputs confirmed
Output required / output realized
Conditions to continue
Seems abstract?
Here is a small example...
Janet enters data into the system. She manages the green tasks of the diagram; we focus only on her work, not the other tasks. She knows how to do 16 tasks in the workflow. We are waiting for the following actions from her to continue, and her intranet dashboard says:
Priority 1: You must send a PO to order enough pencils for the next month based on the sales report.
Task: Send a purchase order
Inputs: Forecast report from the marketing department
Outputs: PO, vendor, item, quantity
Condition for completion: PO sent and order confirmation received from supplier
Priority 2: You must enter into the financial system the number of erasers rejected by production
Task: Data entry
Inputs: Reject count from production
Outputs: Number of rejects
Condition for completion: data entered and confirmed
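If you were to encode such dashboard entries in the webapp, a minimal sketch of the data shape might look like this (a TypeScript illustration; all names are made up):

// One active entry on a user's dashboard: the task, what it needs,
// what it produces, and when it counts as done.
interface DashboardEntry {
  priority: number;
  task: string;
  inputs: string[];              // required before the task can run
  outputs: string[];             // produced by the task
  completionCondition: string;   // when the entry disappears
}

const janetsDashboard: DashboardEntry[] = [
  {
    priority: 1,
    task: "Send a purchase order",
    inputs: ["Forecast report from the marketing department"],
    outputs: ["PO", "vendor", "item", "quantity"],
    completionCondition: "PO sent and order confirmation received from supplier",
  },
];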
We do a lot of troubleshooting on automated production systems with hundreds of thousands of ladder steps (the workflow is too complex to be represented as a whole). When the system is blocked, we look at each module and determine which inputs are missing for task activation or completion.
Good luck!
This sounds like the sort of application for which BPEL is suited.
Of course you don't want to re-architect your system right now. But there are a number of BPEL implementations out there, some of which include graphical editing tools. One of these might help you in your current situation, because they are good at handling scope and hiding detail. So I think you might derive benefit from drawing your workflow as a BPEL diagram even if you don't do anything else with the language.
The Wikipedia page lists several of the available implementations. In addition, Oracle's JDeveloper IDE includes a BPEL Diagrammer as part of its SOA suite; unfortunately it is no longer part of the standard install but it is still available. Find out more.
Try doing it in layers. You have the most detailed layer done, now add additional docs with the details hidden, grouped into higher-level business processes. Users should be able to safely ignore some of those details, but it's good for them to have visibility of how their part fits in to the whole.
You may need more than one higher-level document.
You can use Prezi to present this information to users in a lucid manner.
Split the workflow into phases and present it so that the end user can easily identify the phase he is currently in.
Display as many phases as there are inputs. The workflow starts with 6 different inputs, so display six different buttons on screen, enabling the user to select the input that he wants.
On selecting a button, zoom into the workflow depicting the next steps. This also helps the user verify the actions he has taken so far to reach the current state.
But this way of presenting could become cumbersome as the number of completed steps goes up. Say the user has almost reached the end of the workflow: to check for the next step, he would have to go through all the previous steps, which might frustrate him.
To avoid this, you can split the complete workflow chronologically into 3-5 phases, divided logically. The ultimate aim is not to overwhelm the users with the full workflow. Personally, I would try to avoid any task involving this workflow if it were presented the way you have shown it. No offense; I bet you feel the same.
I could give you a better picture if you re-posted the image with the state names replaced by numbers.
I'd recommend having the whole flow documented somewhere, but in terms of what is distributed to users, how about focusing on task-oriented flows? No one user will be responsible for the entire process I would imagine.
For example, let's say I have 2 roles, A and B, and 6 tasks, 1 through 6, executed in order. Each task may have multiple steps but is self-contained (e.g. download the file, review, run process, review again, upload). A does the even tasks and B does the odd tasks.
A would need to know about those detailed steps that comprise tasks 2, 4, and 6 but not about what goes on in 1, 3, and 5. So hand A a detailed set of flows for the tasks he is responsible for, along with a diagram that treats each task as a black box.
If the flow can't be made modular in this way, you may want to review the process itself to see why it's so complex.
How about showing an example of a workflow scenario, that is, the transitions in one possible pass through the workflow? You could tailor this to a specific user profile and highlight the pertinent states, dimming the others. This lets them get a clear idea of the transitions by seeing a real-life example.
