Is there a way to log / monitor the time taken by each rule in a Drools rule set?
Is there a way to make sure that one rule is not executed more than once? (It seems to be happening in my case.)
What are the general guidelines on improving Drools performance?
Currently I am using one single DRL file with 100-odd rules.
Any additional information you need will be provided.
If your goal is to log/monitor rule execution, you can either use the event listeners, such as AgendaEventListener (http://docs.jboss.org/drools/release/5.3.0.CR1/drools-expert-docs/html_single/index.html#d0e2003), or activate the Drools MBeans to monitor a Drools application using any JMX console of your choice. The MBeans will give you per-rule stats and summarized stats.
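For instance, here is a minimal sketch of a timing listener against the Drools 5.x event API linked above (it assumes ksession is your StatefulKnowledgeSession; in Drools 6+ the corresponding events are Before/AfterMatchFiredEvent). Note that it only times the consequence (then part); condition evaluation happens during insert/update, which is where the MBeans give better numbers:

import org.drools.event.rule.AfterActivationFiredEvent;
import org.drools.event.rule.BeforeActivationFiredEvent;
import org.drools.event.rule.DefaultAgendaEventListener;

// Logs how long each rule's consequence takes to fire.
ksession.addEventListener(new DefaultAgendaEventListener() {
    private long startNanos;

    @Override
    public void beforeActivationFired(BeforeActivationFiredEvent event) {
        startNanos = System.nanoTime();
    }

    @Override
    public void afterActivationFired(AfterActivationFiredEvent event) {
        long micros = (System.nanoTime() - startNanos) / 1000;
        System.out.println(event.getActivation().getRule().getName()
                + " fired in " + micros + " microseconds");
    }
});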
How to improve performance is a very broad question and the answer would be much longer than I can give here, but the short version is: a rules engine like Drools is a relational engine, similar to relational databases but with different goals and optimizations. Nevertheless, many of the same principles apply, and you can improve performance by writing rules with tighter constraints first, sharing constraints among multiple rules, and making proper use of inference/chaining.
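As a rough illustration of the first two points, a sketch in DRL with hypothetical Customer/Order/Notification fact types: the most selective constraint comes first, and the two rules spell their common patterns identically (and in the same order) so the engine can reuse the corresponding network nodes instead of re-evaluating them for each rule:

rule "Flag large gold orders"
when
    $c : Customer( status == "GOLD" )
    $o : Order( customerId == $c.id, total > 10000 )
then
    modify( $o ) { setFlagged( true ) }
end

rule "Notify account manager about large gold orders"
when
    // identical leading patterns, written the same way as above, so they are shared
    $c : Customer( status == "GOLD" )
    $o : Order( customerId == $c.id, total > 10000, accountManager != null )
then
    insert( new Notification( $o.getAccountManager(), $o.getId() ) );
end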
Just FYI, if you are in the San Francisco area, we will have a free Drools bootcamp (but you still have to register) next week during Rules Fest (http://rulesfest.org/html/home.html), where we will present many related topics.
The one pitfall I found fundamental while working with Drools concerns authoring when blocks: don't use computations in when blocks.
This is what the Drools documentation says even about getters:
Person( age == 50 )
// this is the same as:
Person( getAge() == 50 )
Note
We recommend using property access (age) over using getters explicitly (getAge()) because of performance enhancements through field indexing. (source)
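To make the "no computations in when blocks" advice concrete, a small sketch with a hypothetical Order fact: instead of calling a getter that does arithmetic on every evaluation, keep the value as a plain, pre-computed field and constrain on that, so the engine can index it:

// Avoid: Order( computeTotalWithTax() > 100.0 )  -- re-computed on every evaluation, not indexable
rule "High value order"
when
    $o : Order( totalWithTax > 100.0 )   // plain property, eligible for field indexing
then
    modify( $o ) { setPriority( "HIGH" ) }
end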
The Droolsassert testing library can help you detect rules that are triggered more than once and log/monitor the time taken by each rule.
Related
I am looking to re-write (part of) an application that suffers from a lot of code bloat and performance issues, mainly on the front-end side.
The biggest challenge of that application is that the data has to abide by a relatively large set (hundreds) of business rules.
The rules are versatile. Based on events (mainly changes in data), and depending on the state of the underlying business object, the rules can restrict or enforce what data is selected, disable fields or hide them entirely, whatever the business needs really.
I am looking at solutions for how to improve it. There are several business rules engines (such as Drools or Camunda) which seem like great solutions, but to a different problem: they appear focused on business state changes rather than actual field operations/validation, whereas I am looking for a solution that can:
Back-end (Java Spring): validate that the input being saved fulfills all the applicable rules.
Front-end (reactive TypeScript): also validate the input in real time (to provide feedback to users), but on top of that trigger actions. If a rule says that when fieldA.value = 'one' then fieldB is empty and disabled, I would like the form to actually be modified so that it already fulfills this rule.
A common solution to both would be ideal but given the different technologies, two different designs having the same input/output would work fine.
My questions are:
Is there a better-fitting term than 'Business Rule Engine' for what I'm attempting to do, so I can get more pertinent results?
What kind of industry-robust pattern(s) could help manage this use-case?
I need to calculate the price for a service. The price depends on a lot of rules, which could be something like:
day of week the service is consumed
time of day the service is consumed (can overlap)
type of customer (rebate for members)
number of services consumed
combination of services (cheaper if service A and service B are consumed together)
cost cap
minimal price
Some of those rules might contradict each other (e.g. a rebate could result in a price below the minimal price) and some can be combined. Also, the rules based on time criteria are a bit tricky, as the price needs to be calculated from the currently valid price. If the price is:
5$ from 09:00 to 12:00
7$ from 12:01 to 08:59
and you consume the service from 11:00 to 13:00, then the total price is 12$ (5$ for the first hour, 7$ for the second hour).
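For illustration, a minimal sketch of just that time-band split (in Java, to stay consistent with the rest of this page; the types are hypothetical, and it ignores wrap-around past midnight and all the other rules):

import java.time.Duration;
import java.time.LocalTime;
import java.util.List;

public class BandPricing {

    // A price band: an hourly rate that applies between two times of day.
    record PriceBand(LocalTime from, LocalTime to, double ratePerHour) {}

    // Charge each portion of [start, end) at the rate of the band it overlaps.
    static double price(LocalTime start, LocalTime end, List<PriceBand> bands) {
        double total = 0.0;
        for (PriceBand band : bands) {
            LocalTime overlapStart = start.isAfter(band.from()) ? start : band.from();
            LocalTime overlapEnd = end.isBefore(band.to()) ? end : band.to();
            if (overlapStart.isBefore(overlapEnd)) {
                double hours = Duration.between(overlapStart, overlapEnd).toMinutes() / 60.0;
                total += hours * band.ratePerHour();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<PriceBand> bands = List.of(
                new PriceBand(LocalTime.of(0, 0), LocalTime.of(9, 0), 7.0),   // 7$/h before 09:00
                new PriceBand(LocalTime.of(9, 0), LocalTime.of(12, 0), 5.0),  // 5$/h 09:00-12:00
                new PriceBand(LocalTime.of(12, 0), LocalTime.MAX, 7.0));      // 7$/h after 12:00
        // 11:00-13:00 -> one hour at 5$ + one hour at 7$ = 12.0
        System.out.println(price(LocalTime.of(11, 0), LocalTime.of(13, 0), bands));
    }
}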
Any hints on how one could implement this? I was thinking about using a rule engine, but I am not sure how I would start.
I looked into https://github.com/ulfurinn/wongi-engine and https://github.com/Ruleby/ruleby so far.
Also I found a similar question where one reply suggests actually implementing it in pure Ruby: Calculating Prices based on Rules (Ruby Rule Engine).
The pricing model is not up for discussion, unfortunately... I did not decide to have something that complex :-(
One of the aspects you may want to take into consideration, on top of the complexity of the rules, is how often those rules can change and how fast you need to be able to react to those changes. I'd ask myself the following questions:
Do you want to edit the rules on the fly and hot-deploy the changes to production?
Do you want the business to be able to execute CRUD operations on the rules?
Answering those questions will also help you decide which rule engine fits your final solution better. There are quite a few rule engine implementations out there: some are lightweight and really easy to use, but won't allow editing the rules on the fly; others are heavyweight and a bit hard to use, but provide editors and tools that will help you implement a more robust and flexible solution for the customers and the business people.
There are a lot of good rule engines you can use; Drools is one of them, and it will allow you to define rules in a fairly easy way. Drools also comes with a tool named Guvnor that, with the proper configuration, will allow you to execute CRUD operations on rules and deploy them to production on the fly. Drools also allows you to create time-activated rules that will make your life easier with those date/time-based rules.
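For example, Drools rules can carry date-effective / date-expires attributes (real DRL attributes; the fact type, dates and rule body below are made up), which cover the "this rule only applies during this period" kind of requirement:

rule "Summer weekend surcharge"
    date-effective "01-Jun-2012"   // the rule can only activate from this date on
    date-expires   "31-Aug-2012"   // and stops activating after this date
when
    $b : Booking( dayOfWeek in ( "SATURDAY", "SUNDAY" ) )
then
    modify( $b ) { setSurcharge( 2.00 ) }
end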
There are a lot of rule engine frameworks out there you can use, but you'll have a lot of work to do on your side to make those rules available, no matter which framework you choose.
You can see some samples of rule engines in this page: http://tech.gaeatimes.com/index.php/archive/top-10-java-business-rule-engines/
If you don't want to go with those heavyweight rule engines, we are creating a lightweight rule engine that is really easy to use. It is a Java-based rule engine and the rules are embedded in the code, but I think it has all the functionality you need to build your solution using our Group/Rule hierarchy execution combined with the execution strategies. If you want to read more, check the EZ-Rules documentation at Opnitech: http://opnitech.com/content/products/ez-rules/ ; the code is available on GitHub (EZ-Rules).
Closed. This question is opinion-based. It is not currently accepting answers.
I currently work in a small business (15-20 employees, 5 programmers) where most projects are custom-built CMSes and a few web application products.
Since I started working there, I have worked on many projects, but the specifications for each project vary a lot. Sometimes we get a little detail: a Word document describing what the client wants and what we are suggesting (suggested form fields, a short description of the display, etc.). Sometimes we get almost nothing except "do what you think is the best approach for this project/module/request".
My question to you, who might work in different kinds of businesses, is: how (huge pile of paper? Word docs? Visios?) and what kind of information do you get from your superiors, managers, and teammates when starting a project (plenty of analysis, drawings, etc.)? How much detail do you get?
Hope my question is clear enough, thank you.
Specs... that's kind of funny... how about never :(
Seriously, a lot of companies assume specs aren't needed. It's absolutely unacceptable, but this is how it is in a LOT of companies. They assume that from a one-liner the programmer knows what the program should do, the inputs/outputs, and so on.
Unfortunately, in my case I have to actually help write the specs... and I'm the programmer :(
I mostly get a lot of verbal direction and I use a voice recorder to record the conversation and transcribe it when I am done. I write my own specs from my customers' words.
Then, as a good consultant should, I take the write-up back to the customer and verify it, get a signature, and build it, and they live happily ever after! (No they don't; they change their minds a hundred times.)
It can vary depending on what group the work falls under:
Support request - If the change will take a short period of time and is fixing something broken, there is this group. This could be as simple as, "Add Bob to the list of authorized users for that ancient form" where the form is something written years ago and aside from adding and removing users, it isn't touched for fear of breaking things.
Service Advisory Committee request - Items that take up to a few days fall into this group, as these are kind of like mini-projects; the request may be to create a new form or portal for a group. This could be upgrading some third-party software where we have customizations that make the upgrade not necessarily a simple thing for Operations to do.
Project - In this case there are usually a few Word documents and/or e-mail threads that help nail down requirements in terms of scope, budget, and time. These can take months, though there is something to be said for having a prototype to change rather than creating the initial prototype to tell whether requirements are really met or not. Of course, my current project is over a year old, still has a few more months on the timeline, and already has a successor coming after it is done, i.e. there is a Phase II to follow Phase I.
Uber project - These merit their own group of documentation and are the million-dollar, multi-company projects; trying to document everything up front rarely works out well here. Thus, there is some adoption of agile for these, but there are still some growing pains to go through as our use of agile matures. Think installing a dozen modules of some off-the-shelf software that requires both internal and external developers to customize the suite for our specific needs, as the software is supposed to be very robust, flexible, and help save lots of time and money on how people otherwise do their jobs. Think ERP or CRM for a couple of examples here.
We are a 16-person company that creates and supports customized software for small retail shop owners.
The projects we get fall into three general categories (as related to specs):
"Here, automate this form." A sales person explains that our customer only wants this form to appear where they can fill it out and print it to make it look professional to their customer. Our specs is a single piece of paper that looks something like an order form or report. This is always false; they want pop-up lookups, automatic updating from other sources, and "while you're at it" add-ons that more than double the time. These, we've learned to just live in the moment and let the project take its course. By the time we're done, the program doesn't look anything like their original form.
Small changes. Like a simple e-mail explaining that the background color is stale, or a request to sort a report by a different column. These, we just do as time allows.
Big company integrations, where we're tasked with making our software work with some big outfit like Intuit (QuickBooks) or FedEx (shipping rates). These often have well-thought-out documentation and sample code. We get hundreds of pages of Word documents or PDFs. The problem with these is when their specs are wrong. We find out about inaccuracies when we try to test or certify our integration. In these instances, we usually spend longer in certification than we did originally developing the processes.
In all cases, the real trouble is when a sales person promises a solution to the customer before even asking a programmer what it would take. As recently as 2 weeks ago, a sales person got into real trouble and had to issue a refund (that person is no longer with the company).
None - at least not from management.
Instead, as a developer (and particularly one leading a software project right now), I'm expected to contact my users/customers/etc and work directly with them to come up with our specifications and requirements. The documentation I do request from my team is only what will be useful to the team. I am lucky in that management rarely requests a document that doesn't make sense or won't provide some use to our project.
I currently have a half-dozen or so specs each 60-80 pages. One of them is 80 pages with no table of contents. Good times.
Our Product Managers and senior engineers prepare three planning docs for our data management software projects.
High-level requirements: 1-to-3 sentence descriptions of the hardware/software supported or of a specific feature for this project. (10-15 pages of Excel-like grids)
Technical details: Engineering implementation of each high-level requirement. Up to a page for each, depending on the amount of detail. (30-40 pages of filled-in feature details)
Business agreement: Summary of 1 & 2 with engineering schedule and Product Mgmt's market analysis. Everyone signs off on this. (5 pages analysis, 20 technical)
I haven't seen workflows or other Visio-like details in our specs. The prioritized requirements and schedule prove critical, so we understand when to lop things off to save development and testing time.
Closed. This question is opinion-based. It is not currently accepting answers.
I ask about the pattern, not the framework. This is kind of a follow-up to a question on UI auto-generation.
Do you believe in the concept of UI auto-generation from metadata?
What kind of problems can be approached this way?
The question arose when I created a small library to support my student projects, which generates an interactive CLI at runtime based on an object's metadata. And I think the CLI it generates is quite decent.
On the other extreme is the Naked Objects Framework, which is rather universal, but the UI it generates is horrible, IMO.
It's clear that every problem is specific and needs a specific UI, but maybe there are several classes of problems where auto-generation is acceptable?
Yes, I believe the concept of metadata-based auto-generated applications is very sound - mainly because it drastically reduces development time and improves code quality by reducing the massive redundancy you have in most applications where each domain data field is represented in the database, in the model, in the UI, and often also several times in various mapping layers.
I think the future is auto-generated apps that can be modified wherever necessary. Currently, this is AFAIK not really possible; for example, Rails only allows you to fully customize the UI when you use static scaffolding, which basically means code generation, i.e. many further changes in the domain model are then not automatically represented in the UI because the duplication has happened when the code was generated.
I believe the first framework that manages to combine complete auto-generation with complete modifiability afterwards will become the de-facto development standard to a previously unknown degree. Though most likely we'll get there in small steps so that there will not be such a single dominating framework.
Take a look at JMatter, which is a rather better-looking implementation of Naked Objects.
http://www.jmatter.org
There is also Chris Muller's work on MAUI, and Lukas Renggli's work on Magritte (both Squeak/Smalltalk).
We have lots of generated UI in the configuration part of our apps. All those lists that are around forever and changed once in a blue moon by a system administrator.
I find that most applications with a database back-end tend to have a bad design from an OO and NO perspective, as already shown in the NO book by Pawson and Matthews.
Re: qn #1 ... Do you believe in the concept of UI auto-generation from metadata? ... I'm definitely going to answer 'yes' to your first question, being one of the committers to the Naked Objects (Java) framework and writing a book on DDD + NO.
The question mentions metadata. I think this is key to NO being able to succeed. In the latest version (which will be going beta in Feb) the metamodel has been opened up so that it is very extensible: either so you can write your domain model following your own programming conventions/annotations, or, potentially, so that more sophisticated viewers can look for their own metadata to provide more sophisticated views. (For example, consider that if an object implemented a Location interface, then it could be displayed in a Google map.)
Regarding qn #2 ... what kind of problems can be approached this way ... we've always said that NO is more suitable for "sovereign applications" (transactional, operational systems used internally within an organization) than for "transient applications" (like an airport kiosk, say). An NO GUI does require that the user is familiar with the domain, otherwise they won't know what they are looking at.
What's missing still is sophisticated viewers, of course. You are right about the NO GUI, it is definitely low fidelity (though the .NET version is a big improvement, see recent infoq.com article). On the Java side there is a sister project called scimpi.org that has a lot of promise though... it provides a basic web GUI for free but lets you hand-craft web pages as necessary and incrementally. I'm also working on an Eclipse RCP GUI that'll work similarly.
The other thing to add to this though is that the NO approach has value (I believe) even if you choose to write a custom GUI and/or presentation layer. That is, you can use it as a design tool for building a very solid pojo domain layer, and then skin it as you will. Trouble is that NO was never originally sold in those terms, so most will see the NO pattern as an all-or-nothing affair.
Dan
One way to look at this is to consider the difference between the user interface you get from something like Toad or MySQL Browser, where the user interface is constructed directly from the tables and their associated metadata, and the user interface that a skilled designer would develop for the actual application. If the two are not too dissimilar, then it should be fairly low-hanging fruit for an auto-generation framework.
As you say, there are classes of problems which will work quite well with this kind of auto-generation and some which won't. To my mind the key things are, first, how well the implementation model (or the portion thereof) which you are exposing in the user interface maps to the conceptual model of the user; and secondly, how well the behavior of the application can be expressed through a limited set of user interface components (assuming this is a general-purpose UI generation framework).
This article, "Universal Model of a User Interface", may be of interest.
I think the idea of automatically generated UIs has a lot of potential especially for your average form-and-table layout database user interface. However, even there a human needs to be in the loop, having the ability to override the output without it being overwritten with the next regeneration.
I suspect automatically generated UIs would be more successful today if interaction designers were more involved in developing the generation algorithms. My impression is that historically the creators of these systems don’t know what kinds of UI-related metadata to include or how to use it. Specifying labels, value ranges, formats, and orders for fields is a start, but more high level information is needed. Sufficient modeling of the tasks and user roles in particular tends to be lacking, along with some basic style-guide-level principles for UI.
Oracle's Designer 2000, for example, was on the right track in including not only the entities and relations in the model, but also the tasks in the form of a functional hierarchy. Then they blew it by misapplying this metadata (e.g., assuming that depth is always preferred to breadth) and including fundamental flaws when generating the UI (e.g., only one primary window can be opened at a time). The result was UIs that were not even consistent with Oracle's own Applications User Interface Standards.
Getting a basic UI up quickly that lets the customer try out the system and create test data must be of value. Naked Objects frameworks can help with the "bootstrapping", even if you have to replace it with a "hand-crafted" UI before you ship.
In most systems I have worked on, there have been lots of simple housekeeping tables. All these tables need a UI to edit and view them, etc. There is also great value in these simple editors being consistent. Here a Naked Objects framework could save a lot of time, even if the main "day-to-day" UI is "hand-crafted".
I have seen a couple of failed projects (cases where I was brought in as a rather expensive consultant to help architect the replacement) which used the "naked objects" approach (not the framework, AFAIK) - all with simply atrocious UIs. I also worked on replacing a lot of the UI on one project which, in its original incarnation, had a similar approach (the entire application was a tree of objects accessed through context menus and property sheets - this was NetBeans 2.0 circa 1998 - the IDE as a giant hierarchical JavaBean).
The bottom line is, your users don't care about your architecture, they care about getting what they need to do done in the most comprehensible-to-mere-mortals set of interactions you can come up with. If that happens to align with your architecture, you are having a lucky day - but it really is serendipity. Trying to force users to care (or even know) about your architecture is a recipe for software nobody wants to use.
Code generally needs to be designed around two not-always-compatible goals:
Maintainability - people who didn't write the code can understand the code
Stability and performance - i.e. the activities the code asks the computer to physically do are both possible, and can be completed within a reasonable time frame
The abstractions and code structures that it makes sense to create to meet those two goals very, very rarely map exactly to user interface elements of any sort. Sometimes you can get away with it - barely - if your audience is technical. But even there, you are likely to please more users with at least a "presentation layer" adapter layer on top of the architecture that makes sense for programmers and machines.
Closed. This question needs to be more focused. It is not currently accepting answers.
I work for a CMMI level 5 certified company and one thing I hate about it is the number of documents we prepare (as a programmer I already hate documents). We have lots and lots of documents, like the PID (project initiation doc), business requirements, system requirements, tech spec, code review checklist, issue logs, defect logs, configuration management plan, configuration management checklist(s), release documents, and lots more...
Almost 90% of these docs are done just for the sake of the QA audit :) What do you think are the most important documents for a project? Which documents can be used in the long run by another developer?
Please share your good practices here. I would like to use them for my own projects or the company I am planning to start in the long run.
Thanks
The key document is a good functional spec. There should be one and only one reference document for a system.
Overdoing documentation proliferates a large number of small requirements and spec documents every time someone changes a system or interface. For a system of any complexity, before long you have your spec distributed across several hundred assorted Word, Excel, Visio and even PowerPoint files. When this happens you lose clarity about what is current, or even whether you have located and identified all pertinent documentation.
The BRD-SRD-Tech spec progression is based on an assumption that the business signs off the BRD, a business analyst signs off the SRD against requirements documented in the BRD and the technical specification is signed off against the SRD. This generates a web of sign-offs, multiple documents with redundant information and makes it difficult and clumsy to keep the spec documents up to date.
Because of this, subsequent requirements documentation tends to take the form of a series of change request and supplemental requirement and spec docs, each with their own sign-off and audit process. You gain CYA and an audit trail (or at least the appearance of an audit trail), but you lose clarity. There is now no definitive reference document for the system, and it is difficult to establish what is current or relevant to any particular activity. The net result is that your business analysis process gets bogged down in forensic research, which adds overhead and latency to delivery schedules.
A spec document should be built in such a way that there is one definitive reference for any given system or subsystem. The document should be kept up to date and versioned. Get a good technical documentation tool like FrameMaker, so your process can scale and the document has some of the structural integrity lacking in Word.
For me the only real document I ever use is a spec. The more detail the better. However, it doesn't need to be all completed at one time, and it doesn't need to be particularly formal. What is far more useful to me than documents that are checked and signed and double-checked and double-signed is always being able to get the latest version of a document, and being able to talk to people about what they have written and get a decision in the case of any ambiguity. This is far more useful to me than anything else.
To sum up: a spec is the only document I have ever found useful, however it pales in comparison to having a project manager who knows the proposed system inside out, and can make sensible decisions based on what they know.
Documentation is like tofu -- most people hate it until they realize that under the right conditions, it can be really good.
The problem is that what you consider documentation is mostly made for documentation's sake. You, as a developer, don't see any immediate value in the documents you produce because you know you can do your job without all the TPS reports which you're required to make.
Unfortunately, I'm going to wager that there's not a lot you can do about it in a company where you're being forced to eat raw tofu all the time. You'll probably just have to suck it up and write the docs which your company requires, but you can at least do one thing... you can write documents which are at least useful to you, and you can pass them along with your code for others who will maintain it.
Aside from inline documentation, you could set up a wiki to be used by yourself and people on your team. This type of documentation is searchable, which is already a big plus to developers, plus it's more of a living document instead of a homework-like paper you had to write. You already post to SO, so just think of your documentation as pooling your knowledge in a more useful place.
What do you think are the most important documents for a project?
Different people have different needs: for example the documents which the owner needs (e.g. the business contract) aren't the same as the documents which QA needs.
What documents can be used in the long run by another developer?
IMO the most important document (except for the source code) is the functional specification: because what the software is supposed to do (as opposed to, what it is doing) is the one thing that can't necessarily be reverse-engineered. See also How does a good developer keep from creating code with a low bus hit factor?
User Stories, burndown chart, code
I'm a fan of the old 4+1 views:
Use Case view (a/k/a user stories). There are several forms: proper use cases, forward-looking use cases that aren't as well defined and epics which need to be decomposed.
Logical view. The "static" view. UML Class diagrams and the like work well here as a design document. This also includes request and response formats for various protocols. Here is where we document the RESTful requests and responses. This includes the REST URI design.
Process view. The "dynamic" view. UML activity diagrams, sequence diagrams, statecharts and the like go here as design documents. In some cases, simple narratives work well. In other cases, there's a State design pattern, and it requires a combination of class diagrams and statecharts to show how the stateful objects interact.
This also includes protocols (e.g. REST). Here is where we define any special processing for the various REST requests.
This also includes any authentication or authorization rules, and any other cross-cutting aspects like security, logging, etc.
Component view. The pieces we're building for deployment. This includes the stuff we depend on, the structure of the modules and packages, etc. This is often a simple component diagram or a list of components and their dependencies.
Deployment view. We try to generate this from the code as deployed. Since we're using Python, we use epydoc to create the API documentation. We also use Sphinx to import module documentation into this view of the software.
This also includes the parameters, settings, and configuration details.
This, however, isn't sufficient.
When projects start, you have to work up to this through a series of sprints.
The first sprints build just the use case view.
Subsequent sprints build an "architecture" to implement the use cases. The architecture document has 4+1 views, but at a high level of abstraction. It summarizes the structure of the model schemas, the requests and replies, the RESTful processing, other processing, the expected componentry, etc. It never has a Deployment view. We generally reference operator guide and API documents as the deployment view of an architecture.
Then design-and-construction sprints build (and update) detailed 4+1 view documents for various components.
Then release sprints build (and update) the deployment views.
From the project point of view, the most important documents are those that normally include the word Plan, such as the Project Plan, Configuration Management Plan, Quality Plan, etc.
What you are describing is common in process improvement, and normally stems from two major causes. One is that the system really is overreaching and getting in the way of real work being done. The other is actually answered in your question: the documents are not done only for the sake of audits, and your focus should not just be how useful the doc is for other developers, but how useful it is for the project or the company as a whole.
One usually looks at things from one's own perspective; sometimes it's necessary to look at the bigger picture.