I'm currently evaluating a few different issue management tools, and have it narrowed down to TargetProcess, Redmine and YouTrack. For what I need, TargetProcess seems to do everything with a lot less need for customisation; however, as the only person working on QA at a small startup, I'm trying to make sure that as much of the process as possible is automated.
YouTrack has a workflow editor which allows you to write validation rules for your issues, and would therefore allow me to specify that you can't move an issue of a certain type into a certain state without having a related issue of another type; for example, you cannot move a feature out of "New" without a set of related requirements in the form of test cases.
While this isn't as ingrained in Redmine, there is a plugin which allows you to write these types of rules. I haven't, however, been able to find anything of the sort for TargetProcess, and worry that the lack of this sort of deep customisation will add an extra time-sink, as I'll have to spend more time on the process myself.
Is there any way to achieve this in TargetProcess, be it using a plugin or an external service? I can see that I could hook something up to the REST API, but this would make it difficult to give feedback as to why an issue had not been progressed. TargetProcess is an impressive tool; however, it is very expensive, and unless it does everything I want, it is difficult to justify the outlay.
TL;DR
Is there a mechanism for writing business rules into TargetProcess such that the proper QA process is enforced, so I can concentrate on providing value through QA rather than process management?
There are no custom Business Rules in Targetprocess so far. The only thing that exists is a Mashup that allows some rule customization related to custom fields:
https://github.com/TargetProcess/TP3MashupLibrary/tree/master/Custom%20Field%20Constraints
Custom Business Rules have been requested by many people, and we are going to start development this year.
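In the meantime, the external-service route mentioned in the question is workable. Here is a minimal sketch in C# that asks the REST API whether a Feature has linked test cases before some external process allows the transition; the LinkedTestPlan include and all names/credentials here are assumptions, so check your instance's /api/v1 metadata for the real resource and relation names:

```csharp
// Minimal sketch of an external QA gate against the Targetprocess REST API.
// All entity/relation names below are assumptions -- verify against /api/v1.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FeatureGate
{
    static async Task<bool> HasLinkedTestCasesAsync(int featureId)
    {
        using (var client = new HttpClient())
        {
            var credentials = Convert.ToBase64String(
                Encoding.ASCII.GetBytes("user:password")); // use real credentials
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Hypothetical include; adjust to how requirements are linked
            // in your process.
            string json = await client.GetStringAsync(
                "https://example.tpondemand.com/api/v1/Features/" + featureId +
                "?include=[LinkedTestPlan]&format=json");

            // Crude check for the sketch; parse the JSON properly in real code.
            return !json.Contains("\"LinkedTestPlan\":null");
        }
    }
}
```

Feedback on why an issue was blocked could then be posted back as a comment on the entity through the same API.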
Related
I am currently developing an application that parses and manipulates MIME messages, wherein these messages are a central part of the domain model. Although I have, for the moment, already implemented the required parsing functionality, it seems unnecessary to reinvent the wheel should I need to add additional MIME features in the future. I could simply use an available library such as MimeKit, which probably does the job much more efficiently and seems like the more robust way to go. At the same time I feel hesitant about this idea for a couple of reasons:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be that domain objects should not have any external dependencies, since they model a domain that is specific to the business. So if the business rules change, it wouldn't be a good idea to have your domain model depend on an external library. However, since MIME is a standardized protocol this shouldn't be a problem, but that leads to the second point.
Although MIME is a standardized protocol, it has come to my knowledge that the clients from which my application receives these messages do not always fully conform to the RFC specifications. I have yet to come across a problem regarding the MIME format of the messages, but with that in mind I feel as though there's no guarantee that I won't stumble across problems down the line.
I might have to add additional custom functionality regarding the parsing of the messages. This could, however, be solved by adding that functionality on top of the imported classes.
So my questions are:
Would it, under normal circumstances, be a valid alternative to use an external library for standardized protocols as part of the domain model? It doesn't seem right to sully my domain and application layers with external dependencies.
How should I go about this problem given my circumstances? Should I create an interface for the domain model so that I can swap the implementation out if needed in the future? This would require isolating the external dependencies in a class and mapping all the data to fit the contracts of the application layer, which almost seems like more work than implementing the protocol myself. Or should I just implement it myself and add new features successively, just to make sure that I have full control of the domain model?
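For concreteness, the interface approach from the second question might look like the sketch below: the domain and application layers see only the interface, and MimeKit stays confined to one adapter class (IMessageParser and EmailMessage are hypothetical names; MimeMessage.Load is MimeKit's actual parsing entry point):

```csharp
using System.IO;
using MimeKit; // the external dependency, isolated in the adapter below

// The domain's own message type -- no external types leak out of the adapter.
public class EmailMessage
{
    public string From { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface IMessageParser
{
    EmailMessage Parse(Stream rawMime);
}

public class MimeKitMessageParser : IMessageParser
{
    public EmailMessage Parse(Stream rawMime)
    {
        MimeMessage mime = MimeMessage.Load(rawMime); // MimeKit does the parsing
        return new EmailMessage                        // map to the domain's contract
        {
            From = mime.From.ToString(),
            Subject = mime.Subject ?? string.Empty,
            Body = mime.TextBody ?? string.Empty
        };
    }
}
```

Swapping in a hand-rolled parser later would then only touch the adapter, not the domain or application layers.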
I would highly appreciate your input.
Your entire question boils down to the following flawed thinking:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be...
Why let consensus make your decisions for you?
Who are these people who make up this "consensus"?
How do you know they have any idea what they are talking about?
Trusting the consensus of unknown sources seems like a terrible way to make decisions for your project.
Do you want to write software that solves real problems? Or do you want to get lost in the weeds of idealism and have your project fail before it even gets out of the design phase?
Do what makes sense for you.
Can someone please tell me which of the following has more advantages: plugins or workflows?
As the post in Custom WorkFlows vs Plug-ins in MS CRM seems to be a little outdated, I can share my experiences with you.
Workflows:
Contain logic you provide by simply "clicking" together the actions you want performed (like Update, Create, etc.)
Can be run on demand
Can often be handled by key users and do not need an explicit developer
Should not be used for complicated logic, as the interface often does not provide the possibility to add additional logic afterwards
If used for complicated logic (as stated above), refactoring or changes are often very hard to integrate!
In current cloud organisations you get the information that you SHOULD NOT use these anymore, but should switch to MS Flow instead. (VERY IMPORTANT!!)
Plugins:
Custom code - so you can provide very complicated as well as simple server-side logic
You need a(n experienced) developer
Can perform faster than workflows!
Nearly everything you can do with a workflow can be done by a plugin (or job), but not vice versa
You have the possibility to trigger the plugin as well as hand in data (parameters!), since you can create your own "messages". By this I mean you are not limited to Update, Delete, Create, etc. as messages for plugins; you can define your own message steps by creating "Actions" in the Processes section of your Dynamics organization, where you can define input AND output parameters. These custom messages can also be triggered on demand, for instance via JavaScript. (Guide on how to use/create custom messages (Actions))
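To give an idea of the "custom code" side, here is a minimal plugin skeleton using the standard IPlugin entry point from the CRM SDK (the new_approvedby attribute is a made-up example field):

```csharp
// Minimal Dynamics CRM plugin skeleton; registered on a message/step with
// the Plugin Registration Tool.
using System;
using Microsoft.Xrm.Sdk;

public class RequireApproverPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        // For Create/Update the "Target" input parameter holds the record;
        // a custom Action instead exposes whatever parameters you defined.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target)
        {
            // Throwing aborts the operation and shows the message to the user.
            if (!target.Attributes.Contains("new_approvedby"))
                throw new InvalidPluginExecutionException(
                    "An approver must be set before saving.");
        }
    }
}
```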
In my experience plugins are mostly the better-suited solution if you have an (even slightly) complicated matter, as workflows are far less maintainable. Simple "one-liners" can often be replaced by workflows.
Nevertheless, each developer/consultant has to suggest their own way for the improvement/development of their organization.
#Community: Feel free to correct me if I am wrong anywhere or if you have different experiences.
For a large application that will be developed, we are in the process of selecting a validation framework. Although the Workflow Rules engine is not strictly a validation framework, it can be used by itself, without the rest of Workflow Foundation. It appears to give the flexibility of specifying rules in a database that are used at runtime. However, it appears that you cannot specify rules in code.
If greater flexibility is one of the requirements (not necessarily that the rules need to be edited by Business analysts), which of the two would you prefer and why?
It matters a great deal what your exact requirements are. 'Being flexible' is by itself not a good requirement, because it isn't measurable; whether something is flexible is very subjective.
I'm not familiar with Microsoft Business Rules Engine, so I can't comment on that. I am however very familiar with the Microsoft Enterprise Library Validation Application Block (VAB) and it has served me well over the last year. It has several features that make it flexible for the situations I’m dealing with:
It allows defining validation both declaratively (using attributes) and via an external configuration file (which is very useful when your entities are generated).
It contains a set of default validators that can be used and custom validators can be written.
It allows validation of single properties and allows you to compare multiple properties as a group (by using self-validation or custom validators).
It allows validating objects in isolation, as well as object graphs.
It allows you to define multiple 'rule sets', which for instance allows you to define a set of hard errors and a set of warnings (see the sketch below).
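To illustrate the declarative style and rule sets mentioned above, a small sketch (validator and helper names as in Microsoft.Practices.EnterpriseLibrary.Validation; the Customer class is illustrative):

```csharp
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Customer
{
    // Declarative rules; the same rules could instead live in an external
    // configuration file, which is useful when entities are generated.
    [NotNullValidator(MessageTemplate = "Name is required.", Ruleset = "HardErrors")]
    [StringLengthValidator(1, 50, Ruleset = "HardErrors")]
    public string Name { get; set; }
}

public static class CustomerChecks
{
    public static bool IsValid(Customer customer)
    {
        // Evaluate only the "HardErrors" rule set; a separate "Warnings"
        // set could be validated independently.
        ValidationResults results = Validation.Validate(customer, "HardErrors");
        return results.IsValid;
    }
}
```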
VAB (or the Enterprise Library as a whole) allows you to write a custom configuration source (IConfigurationSource), which allows you to define your business rules wherever you want. So in theory you could store them in the database; however, you will have to write such a configuration source yourself, and this will be quite some work. Especially if you want your business analysts to be able to define validations and update the database with some sort of editing tool, I think this will be quite hellish to accomplish with VAB.
If there really is a requirement for the business people to write those rules themselves, hopefully you have a process supporting this requirement. For instance, how are they going to test whether their changes are correct? You wouldn't want your business analysts to make changes directly to the production database.
But please give this a thought. If the rules are going to be tested, I expect those rules not to be changed directly in your production database; otherwise you would be testing after the fact. So the analysts would be changing the rules in their own environment, and you'd probably be publishing the new rules from this environment to a test environment, later on to an acceptance environment, and eventually to a production environment. And if you're taking these steps, should you still use a database to store those business rules? Using a configuration file would make it much easier than using the database: deployment would simply be a file copy, instead of copying the content of multiple tables from database to database.
I'm interested in what others have to say about the validation frameworks they know well.
Recently I had a project in which I had to get some data from particular software system to a portlet. The software used a database, and I spent a fair bit of time modeling the data I wanted and then creating a web service so that my portlet could grab the information.
Then it suddenly struck me that I was wasting my time. I grabbed BIRT, tossed it into a portlet, and then just wrote some reports that directly grabbed the necessary data from the database. I was done in an afternoon.
I understand that reporting is a one-way street, but this got me thinking. Reporting tools can be very effective for creating reports (duh) from your actual data, but when you do this you bypass your model, which except in simple cases is not a direct representation of your data as it exists in your database.
If you're writing a data-intensive application and require the ability to perform non-trivial reporting, do you bypass your application and use something like BIRT or Crystal Reports? How do you manage these tools as part of your overall process? Do you consider the reports you write as being part of your application and treat them as such? A report is a view and a model and a controller (if you will) all in one big mess, how do you deal with and interpret and plan for that?
Revised question: it's possible and even common that a report will perform some business calculations that in a perfect world you would like to have contained in your application. This can lead to a mismatch of information given back to the user. On the other hand, reporting tools make it so easy to gather and display information that it's hard to take a purist's approach and do everything from within the application. Are there any good techniques for ensuring that the data in your reports matches the data that you might be showing in the regular GUI?
I see reporting as simply another view on the data, not a view/model/controller in one (well, maybe a view and controller in one).
We have our reports (built in SQL Server 2008 Reporting Services) consume a service in our application layer to get data (keeping with our standard that data access lives in a repository). These functions can do a simple query or handle very complex processing that would be a nightmare in your reporting environment or in a stored procedure. In practice, we find this takes no longer than coding up some one-off stored procedure that will, as your system grows and grows, become a nightmare to maintain.
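The shape of that service is nothing exotic; a sketch of the idea (names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// A line as the business layer defines it -- discounts, currency conversion,
// and other business calculations are already applied here.
public class SalesLine
{
    public string Product { get; set; }
    public decimal Revenue { get; set; }
}

// Consumed by the report's data source *and* by the GUI's sales screen,
// so both always show the same numbers.
public interface ISalesReportService
{
    IList<SalesLine> GetSales(DateTime start, DateTime end);
}
```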
Treating reporting as simply a one-off or not integrating into your application design is a huge mistake.
Reporting is crucial. It is mostly crucial for sharing values collected in one system with external users, i.e. users not directly using the system (e.g. management for sales figures). So reporting is a lot more than just displaying facts and figures; it is central to almost every system that drives a commercial operation.
At least the more advanced systems allow you to enhance them with your own reusable "controls". Even a way back into the system can be implemented - if you just use the correct plugins. Once I wrote a system to send emails out of a report, because the system did not allow for change. It worked - though it was not meant to be used that way ;)
Reports make up a good part of the application, and you gain a lot of freedom if you make reports changeable by your customers. Sometimes you come up with more possibilities than you thought of when you built the system in the first place.
So yes, for me reporting is part of the system.
Reports are part of your app, but because they are generally something a user will have stronger ideas about than, say, your data capture UI, I'd sacrifice purity for convenience/speed of delivery and get back to "real" coding... :-)
As soon as you've done a report, users want another one, or a change of colour, or optional grouping, or more filtering, or... something that takes you away from whizzier stuff... so I don't bust a gut maintaining purity.
This is a fine line indeed. You don't want to spend too much time building reports (which users want you to change all the time anyway), but you don't want to duplicate logic by putting business logic into your reports! With our reporting products at Data Dynamics, I think we have reached a happy medium between these two tradeoffs.
By using the ObjectDataProvider (see links below for more info) you can bind the report directly to business objects (plain old objects), so you don't have to bypass your business layer to get data. At the same time we provide a way to reference and use functions from other libraries in your report. This way, if you already have some code to do business-logic calculations, you can reuse those functions directly within your report. You can see an example of this in the links below too.
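As a generic illustration (not specific to our product), the kind of shared function worth referencing from a report can be as simple as:

```csharp
// Lives in a shared library referenced by both the application and the
// report, so the discount calculation exists in exactly one place.
public static class Pricing
{
    public static decimal NetPrice(decimal gross, decimal discountRate)
    {
        return gross * (1 - discountRate);
    }
}
```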
Binding to Objects for your Data (see "Object Provider" section): http://www.datadynamics.com/Help/ddReports/ddrconDataSetAndObjectDataSource.html
Adding Custom Code to your reports Walkthrough: http://www.datadynamics.com/Help/ddReports/ddrwlkCustomCode.html
Using Custom Assemblies (referencing shared libraries/dlls from your report): http://www.datadynamics.com/Help/ddReports/ddrconCustomCode.html, and http://www.datadynamics.com/Help/ddReports/ddrtskCreatingAnInstanceMethod.html
Scott Willeke
Data Dynamics / GrapeCity
The way I've always worked with reports is to consider them part of the code-base, stored in source control along with the application. In some contexts, reports are more important than the application, in that management makes business decisions off of report data; having the wrong information can cause them to cancel a product line, cancel a campaign, or fire a salesperson. Obviously, this depends highly on your management and your application.
Regarding keeping your model consistent, that is a slightly trickier question. One way to ensure a consistent model between reports and your application is to use stored procedures (or views) to retrieve data, depending on your application's architecture.
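As a sketch of that approach, the application can read from the same view the report queries, so any calculation baked into the view matches on both surfaces (dbo.SalesSummary and the connection handling are illustrative):

```csharp
using System.Data.SqlClient;

public class SalesSummaryReader
{
    // The report's dataset runs the same SELECT against dbo.SalesSummary,
    // so both surfaces share one definition of "revenue".
    public decimal TotalRevenue(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT SUM(Revenue) FROM dbo.SalesSummary", connection))
        {
            connection.Open();
            return (decimal)command.ExecuteScalar(); // assumes at least one row
        }
    }
}
```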
I have some doubts about aspnet_membership. The database team at my company doesn't let me use aspnet_membership, because this feature uses stored procedures to install things that break company policy. They said this feature could pose some risk, but I don't know what risk I'd be taking if I used it.
Does anyone have any idea or reason about this issue?
Your "database team" is obviously paranoid. What sort of security risk do they anticipate if the aspnet_regsql.exe tool creates a database? I can understand that many Administrators feel uneasy using wizards because they want to know exactly what is going on behind the scenes. In this case, the command line tool allows you a high degree of customizability and should be preferred over the wizard mode.
Perhaps you and your database team should read up on the published implementation of the tool, rather than being obstinate about something of which you are ignorant. If you still find "Creating the Application Services Database for SQL Server" to be overkill, it is quite possible to create your own MembershipProviders and PersonalizationProviders and incorporate them with your own database structure.
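If only parts of the default behaviour worry the DBAs, a middle road is deriving from the concrete SqlMembershipProvider and overriding selectively, rather than implementing every abstract member of MembershipProvider from scratch (the class name and audit hook below are hypothetical):

```csharp
using System.Web.Security;

// Reuses the stock SQL provider but adds a hook the database team controls.
public class AuditedMembershipProvider : SqlMembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        bool ok = base.ValidateUser(username, password);
        // Hypothetical: log the attempt to an audit table in a schema the
        // DBAs own, keeping them in the loop on authentication activity.
        return ok;
    }
}
```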
I know this is an old question, but (as most good questions do) it made me think.
I can see the perspective of your database team in certain scenarios:
You are subject to compliance regulations that require complete separation of duties and thorough step-by-step documentation of every change made to an environment. This means that you can't just run any utility (no matter how well-documented or tested) on your own; you must hand it off to another team for approval, and that team must understand (in theory) every potential impact.
Perhaps there is a very granular, object-by-object security policy in place, and your database team feels that auto-generated table structures would violate it.
Not a security breach, but depending on the naming standards you have in place, the tables/procedures created by the tool may violate those conventions.
Perhaps you work in finance or healthcare and there is a specific set of security requirements that must be met. Either through ignorance or research, your database team believes that the membership database will not meet these requirements.
I've worked with many companies that have draconian security policies that often cause a great deal of extra time to be spent. Conversely, the consequences of not following those policies can have such dire consequences (multi-million dollar fines here in the US) that caution/paranoia usually wins out.
In summary: if the ASP.Net authentication/authorization scheme is a good fit, then invest the time in educating the database team on its internals so they can be comfortable with it. Listen to their objections; DBAs can be over-protective of their environments, but they often are responsible for key assets of a company.
As @Cerebrus pointed out, there are many different ways to use ASP.Net security. Try to leverage the parts that make sense. If your database team still doesn't "get it" (or raises valid objections), there are plenty of other ways to implement robust security in ASP.Net; they just require more effort.