Would it be valid to use external libraries for standardized protocols (MIME) as a part of the domain model?

I am currently developing an application that parses and manipulates MIME messages, and these messages are a central part of the domain model. Although I have, for the moment, already implemented the functionality required for parsing these messages, it seems unnecessary to reinvent the wheel should I need to add more MIME features in the future. I could simply use an available library such as MimeKit, which probably does the job much more efficiently and seems like the more robust way to go. At the same time, I feel hesitant about this idea for a couple of reasons:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be that domain objects should not have any external dependencies, since they model a domain that is specific to the business. So if the business rules change, it wouldn't be a good idea to have your domain model depend on an external library. However, since MIME is a standardized protocol this shouldn't be a problem, but that leads to the second point.
Although MIME is a standardized protocol, I have learned that the clients from which my application receives these messages do not always fully conform to the RFC specifications. I have yet to come across a problem with the MIME format of the messages, but with that in mind I feel there's no guarantee that I won't stumble across problems down the line.
I might have to add additional custom functionality regarding the parsing of the messages. This could, however, be solved by adding that functionality on top of the imported classes.
So my questions are:
Would it, under normal circumstances, be a valid alternative to use an external library for standardized protocols as a part of the domain model? It doesn't seem right to sully my domain and application layers with external dependencies.
How should I go about this problem given my circumstances? Should I create an interface for the domain model so that I can swap in another implementation if needed in the future? This would require isolating the external dependencies in a class and mapping all the data to fit the contracts of the application layer, which almost seems like more work than implementing the protocol myself. Or should I just implement it myself and add new features successively, to make sure that I have full control of the domain model?
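To make the interface option concrete, here is roughly the kind of isolation I have in mind. This is a minimal Python sketch, using the standard library's email parser as a stand-in for a library like MimeKit; all names are hypothetical and the parsing is deliberately simplified:

    from dataclasses import dataclass
    from email import message_from_bytes  # the external dependency, isolated here


    @dataclass(frozen=True)
    class DomainMessage:
        """Domain contract: the rest of the application sees only this type."""
        subject: str
        sender: str
        body: str


    class MimeMessageAdapter:
        """Anti-corruption layer: maps the library's types onto the domain model."""

        def parse(self, raw: bytes) -> DomainMessage:
            parsed = message_from_bytes(raw)  # the only place the library is touched
            return DomainMessage(
                subject=parsed.get("Subject", ""),
                sender=parsed.get("From", ""),
                body=self._extract_text(parsed),
            )

        @staticmethod
        def _extract_text(parsed) -> str:
            # Simplified: real code would handle encodings and missing parts.
            if parsed.is_multipart():
                for part in parsed.walk():
                    if part.get_content_type() == "text/plain":
                        return part.get_payload(decode=True).decode(errors="replace")
                return ""
            return parsed.get_payload(decode=True).decode(errors="replace")

Swapping the library for a hand-rolled implementation would then only touch the adapter, never the domain or application layers.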
I would highly appreciate your input.

Your entire question boils down to the following flawed thinking:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be...
Why let consensus make your decisions for you?
Who are these people who make up this "consensus"?
How do you know they have any idea what they are talking about?
Trusting the consensus of unknown sources seems like a terrible way to make decisions for your project.
Do you want to write software that solves real problems? Or do you want to get lost in the weeds of idealism and have your project fail before it even gets out of the design phase?
Do what makes sense for you.

Related

Simplest C++ library that supports distributed messaging - Observer Pattern

I need to do something relatively simple, and I don't really want to install a MOM like RabbitMQ, etc.
There are several programs that "register" with a central "service" server through TCP. The only function of the server is to call back all the registered clients when they all, in turn, say "DONE". So it is a kind of "join" (edit: Barrier) for distributed client processes.
When all clients say "DONE" (they can be done at totally different times), the central server messages them all saying "ALL-COMPLETE". The clients "block" until asynchronously called back.
So this is a kind of distributed asynchronous Observer pattern. The server has to keep track of where the clients are somehow. It is OK for the client to pass its IP address to the server, etc. It is constructible with things like Boost::Signal, Boost::Asio, Boost::Dataflow, etc., but I don't want to reinvent the wheel if something simple already exists. I got very close with ZeroMQ, but none of their patterns support this use case very well, AFAIK.
Is there a very simple system that does this? Notice that the server can be written in any language. I just need C++ bindings for the clients.
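For clarity, the server behaviour I am after is roughly this. A minimal illustrative sketch (Python for brevity, though the real clients would be C++; the port, client count, and message format are made up, and it only handles a single round of the barrier):

    import socketserver
    import threading

    EXPECTED_CLIENTS = 3          # hypothetical: size of the distributed barrier
    lock = threading.Lock()
    arrived = 0
    all_done = threading.Event()  # trips when every client has said DONE


    class BarrierHandler(socketserver.BaseRequestHandler):
        """Each client connects, says DONE, and blocks until ALL-COMPLETE."""

        def handle(self):
            global arrived
            if self.request.recv(64).strip() != b"DONE":
                return
            with lock:
                arrived += 1
                if arrived == EXPECTED_CLIENTS:
                    all_done.set()    # the last client trips the barrier
            all_done.wait()           # everyone blocks here until then
            self.request.sendall(b"ALL-COMPLETE\n")


    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("0.0.0.0", 9999), BarrierHandler) as srv:
            srv.serve_forever()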
After much searching, I used this library
https://github.com/actor-framework
It turns out that doing this with this framework is relatively straightforward. The only real "impediment" to using it is that the library seems to have gone through an API transition recently and the documentation PDF has not completely caught up with the source. No biggie, since the example programs and the source (.hpp) files get you over this hump. However, they need to bring the docs in sync with the source. In addition, IMO they need to provide more interesting examples of how to use C++ actors for extreme performance. For my case it is not needed, but the idea of actors (shared-nothing) in this use case is one of the reasons people use them instead of shared-memory communication between threads.
Also, the syntax that the library enforces (get used to lambdas!) can be a bit of a mind-twister at first if one is not used to state-of-the-art C++11 programs. Beyond that, the only other caveat was the trivial matter of remembering all the clients that registered with the server.
STRONGLY RECOMMENDED.

Enforcing relational workflows in TargetProcess

I'm currently evaluating a few different issue management tools, and have narrowed it down to TargetProcess, Redmine and YouTrack. For what I need, TargetProcess seems to do everything with a lot less need for customisation; however, as the only person working on QA at a small startup, I'm trying to make sure that as much of the process as possible is automated.
YouTrack has a workflow editor which allows you to write validation rules for your issues, and would therefore allow me to specify that you can't move an issue of a certain type into a certain state without having a related issue of another type; for example, you cannot move a feature out of "New" without having a set of related requirements in the form of test cases.
While this isn't as ingrained in Redmine, there is a plugin which allows you to write these types of rules. I haven't, however, been able to find anything of the sort for TargetProcess, and worry that the lack of this sort of deep customisation will become an extra time-sink, as I'll have to spend more time on this process myself.
Is there any way to achieve this in TargetProcess, be it using a plugin or an external service? I can see that I could hook something up to the REST API, but this would make it difficult to give feedback as to why an issue had not been progressed. TargetProcess is an impressive tool, but it is very expensive, and unless it does everything I want, it is difficult to justify the outlay.
TL/DR
Is there a mechanism for writing business rules into TargetProcess such that the proper QA process is enforced, so I can concentrate on providing value through QA rather than process management?
There are no customized Business Rules in Targetprocess so far. The only thing that exists is a Mashup that allows some rule customization related to custom fields:
https://github.com/TargetProcess/TP3MashupLibrary/tree/master/Custom%20Field%20Constraints
Custom Business Rules have been requested by many people, and we are going to start development this year.

Where to begin with SNMP agent implementation?

Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered. That could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is that eventually we will integrate SNMP into our Java application server, which will allow the end users to incorporate it into their pre-existing Network Management Systems (NMS).
Unfortunately, I'm feeling entirely confused by what is meant to be going on. What I understood from talking to the end users (which was, unfortunately, before any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree-type display, giving them feedback on different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is; there don't seem to be any specific examples of what any of the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had wrongly assumed that the interface of the Agent would allow the monitored attributes to be requested and then, in turn, the values for those attributes, allowing any new Agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case, and the Agent must instead define MIBs which can be used by the NMS to determine those attributes.
My confusion is increased when people start referring to thousands of existing MIBs and the fact that they can be reused, which I don't understand. Is the intention that a single MIB definition can be used to describe how a particular attribute of a network device (something simple like internet connected on a router: yes/no) works across many different devices? If so, I don't believe that our software would allow the monitoring of anything common to any other device/system, but should we be looking for already existing MIBs? At the moment I don't really see any good rationale for such a system; surely it would be easier for the Agent to export that information, so I'd appreciate it if someone could enlighten me!
I think it would help if I was able to set up a simple SNMP agent and some sort of client; I could then begin to see the process and eventually inspect the communication between the two, but am finding it difficult to find anywhere that provides any information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS, but their 'get started quick' section recommends downloading a 600 MB virtual machine; surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about MIBs, and having never had to deal with anything like the referenced RFCs before, I find that while they may contain all of the information, they seem completely impenetrable to me at the moment. Are there any books that can be recommended for an overview and implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have been reading SNMP information piece by piece in a disorganized way. That is not recommended and has, of course, led you to confusion.
What about forgetting what you have learnt so far and diving into a good book such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it.
You cannot depend on a forum to teach you the ABCs; I have found that impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
Generally you would create, ship, and use your own MIBs that describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes, there are thousands of other pre-existing MIBs, and the likelihood that you need more than one or two of these is remote. They are typically published versions of MIBs for existing products.
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily) then examine the results using a packet capturer like Wireshark.
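For example, once Net-SNMP's agent (snmpd) is running locally, you can generate traffic to capture by issuing a one-shot GET from the manager side. A sketch using the third-party pysnmp package (assuming an agent on localhost:161 with community string "public"; the exact hlapi names vary a little between pysnmp versions):

    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    # Ask the agent for sysDescr.0 (OID 1.3.6.1.2.1.1.1.0), the textual system
    # description defined in the standard SNMPv2-MIB.
    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),        # SNMP v2c
            UdpTransportTarget(("localhost", 161)),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        )
    )

    if error_indication:
        print(error_indication)
    else:
        for var_bind in var_binds:
            # The MIB is what lets the numeric OID be rendered as a name here.
            print(" = ".join(x.prettyPrint() for x in var_bind))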
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl Net-SNMP code were very straightforward to implement and query.

MVC3 / VoiceXML Best Practices

All,
I'm currently revamping an ancient IVR written using Classic ASP with VXML 2.0. Believe me, it was a mess, largely due to the mixing of routing logic between the ASP code and the VXML logic, featuring multiple postbacks a la ASP.NET. Not fun to debug.
So we're starting fresh with MVC 3 and Razor and so far so good. I've succeeded in moving pretty much all the processing logic to the controller and just letting most of the VXML be just voicing a prompt and waiting for a DTMF reply.
But, looking at a lot of sample VXML code, it's beginning to look like it might actually be simpler to do basic routing using multiple <form>s on a page and VXML's built-in DTMF processing and <goto> navigation. More complex decision-making and database/server access would call the controller as it does now.
I'm torn between the desire to be strict about where the logic is, versus what might actually be simpler code. My VXML chops are not terribly advanced (I know enough to be dangerous), so I'm soliciting input. Have others used multiple forms on a page? Better or worse?
Thanks
Jim Stanley
Blackboard Connect Inc.
Choosing to use simple VoiceXML and moving the logic server side is a fairly common practice. Pros/Cons below.
Server-side logic
Often difficult to get retry counters to perform the way you want if you are also performing input validation (valid for grammar, but not for host or other validation logic)
Better programming language/toolkits for making logical descriptions (I'm not a fan of JavaScript, but even if you like JavaScript, you tend to have to create a lot of forms to get the flow control you want).
Usually easier to debug. Step through logical decisions and access to logging tools.
Usually easier to create reusable components that use parameters to alter component behavior.
Client side logic
Usually more scalable. VoiceXML browsers tend to use a large amount of their resources compiling and processing pages. One larger page will typically do better than a variety of smaller pages. However, platforms vary significantly and your size may make this negligible.
Better chance of using static pages. Many platforms have highly optimized caches (more than just fetched data). As above, this may only matter if you have 100s of ports per device or 1000s of ports hitting a server.
Mixing and matching isn't bad until somebody requests some sort of global behavior change. You may be making the change in multiple places. Debugging techniques will also vary so it may complicate your support paths (e.g. looking in browser logs versus server logs to see what happened on a call).
Our framework currently uses a mix of server- and client-side logic. All our logic is in the VoiceXML, and the server is used for state saving and generating recognition components. Unfortunately, because all our logic is in the VoiceXML, it is harder to unit test.
Rather than creating a large VoiceXML page that subdialogs to each question, with all the routing done on the client side, we post back to the server after each collection and then work out where to go next. Obviously this has its pros/cons, as Jim pointed out, but the hope is to abstract some of the IVR/call flow from the VoiceXML and reduce the dependency on skilling up developers in VoiceXML.
I'm looking at redeveloping using MVC3, creating different views based on base IVR functions, which can then be modified based on the hosting VoiceXML platform:
Recognition
Prompts
Transfer
CTI Get/Set
Disconnect
What I'm still working out is how to create reusable components within the MVC: whether to create something we subdialog to that returns the result (similar to how we currently do it), or redirect to a generic controller and then redirect to the "Completed" action once the controller is done.
Jim Rush provides a pretty good overview of the pros and cons of server side versus client side logic and is pretty consistent with my discussion on this topic in my blog post "Client-side versus Server-side Development of VoiceXML Applications". I believe the pros of putting the logic on the server far outweigh putting it on the client. The VoiceXML User Group is moving towards removing most of this logic from VoiceXML in version 3.0 and suggesting using a new standard called State Chart XML (SCXML) to handle control of the voice application. I have started an open source project to make it easier to develop VoiceXML applications using ASP.NET MVC 3.0 which can be found on CodePlex and is called VoiceModel. There is an example application in this project which will demonstrate a method for keeping the logic server side, which I believe greatly improves reuse of voice objects.

How to design a command line program reusable for a future development of a GUI? [closed]

What are some best practices to keep in mind when developing a script program that could be integrated with a GUI, probably by somebody else, in the future?
Possible scenario:
1. I develop a fancy Python CLI program that scrapes every unicorn image from the web
2. I decide to publish it on GitHub
3. A unicorn-fan programmer decides to take the sources and build a GUI on top of them
4. He/she gives up because my code is not reusable
How do I prevent step four and let the unicorn-fan programmer build his/her GUI without too much hassle?
You do it by applying a good portion of layering (maybe implementing the MVP pattern) and treating your CLI as a UI in its own right.
UPDATE
This text from the wikipedia article about the Model-View-Presenter pattern explains it quite well.
Model-view-presenter (MVP) is a user interface design pattern engineered to facilitate automated unit testing and improve the separation of concerns in presentation logic.
The model is an interface defining the data to be displayed or otherwise acted upon in the user interface.
The view is an interface that displays data (the model) and routes user commands (events) to the presenter to act upon that data.
The presenter acts upon the model and the view. It retrieves data from repositories (the model), persists it, and formats it for display in the view.
The main point is that you need to work on separation of concerns in your application.
Your CLI would be one implementation of a view, whereas the unicorn fan would implement another view for a rich client. The unicorn fan would base his view on the same presenters as your CLI. If those presenters are not sufficient for his rich client, he could easily add more, because each presenter is based on data from the model. The model, in turn, is where all the core logic of your application lives. Designing a good model is an entire subject in itself. You may be interested in reading, for example, about Domain-Driven Design, even though I don't know how well it applies to your current application. But it's interesting reading anyway.
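To illustrate, here is a minimal, hypothetical Python sketch of that separation (all names invented). The presenter knows nothing about how results are rendered, so a GUI view could later replace the CLI view without touching the model:

    from typing import Protocol


    class UnicornModel:
        """Core logic: knows how to find unicorn images, nothing about UIs."""

        def find_images(self, query: str) -> list[str]:
            # Real scraping logic would live here; stubbed for illustration.
            return [f"https://example.com/{query}-{i}.png" for i in range(3)]


    class View(Protocol):
        """Any UI (CLI, GUI, web) implements this interface."""

        def show_results(self, urls: list[str]) -> None: ...
        def show_error(self, message: str) -> None: ...


    class Presenter:
        """Mediates between model and view; performs no I/O of its own."""

        def __init__(self, model: UnicornModel, view: View) -> None:
            self.model, self.view = model, view

        def search(self, query: str) -> None:
            try:
                self.view.show_results(self.model.find_images(query))
            except Exception as exc:
                self.view.show_error(str(exc))


    class CliView:
        """The command line is just one view among many."""

        def show_results(self, urls: list[str]) -> None:
            print("\n".join(urls))

        def show_error(self, message: str) -> None:
            print(f"error: {message}")


    if __name__ == "__main__":
        Presenter(UnicornModel(), CliView()).search("unicorn")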
As you can see, the wikipedia article on MVP also talks about testability, which is also crucial if you want to provide a robust framework for others to build on. To reach a high level of testability in your code-base, it is often a good idea to use some kind of Dependency Injection framework.
I hope this gives you a general idea of the techniques you need to employ, although I understand that it may be a little overwhelming. Don't hesitate to ask if you have any further doubts.
/Klaus
This sounds like a question about how to write reusable code.
When considering reusability of code, generally speaking, one should try to:
separate functionality into modules
have a well-defined interface
Separating functionality into modules
One should try to separate code into parts that have a single responsibility. For example, a program that goes out to the internet to scrape pictures of unicorns may be separated into sections that (a) scrape the web for images, (b) determine whether an image is a unicorn, and (c) store said unicorn images in some specified location.
Have a well-defined interface
Having a well-designed interface, an API (application programming interface), is going to be crucial to providing a way to reuse or extend an application.
Providing entry points into each functionality will allow other programmers to actually write a new user interface for the provided functionality.
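Putting both points together, the unicorn example might be laid out like this. A hypothetical single-file Python sketch (in a real project each stub would be its own module behind the same signatures):

    def iter_images(start_url: str):
        """(a) Scrape the web for candidate image URLs (stubbed here)."""
        yield from (f"{start_url}/unicorn-{i}.png" for i in range(3))


    def is_unicorn(image_url: str) -> bool:
        """(b) Decide whether an image shows a unicorn (stubbed here)."""
        return "unicorn" in image_url


    def save(image_url: str, dest_dir: str) -> None:
        """(c) Store an accepted image (stubbed here)."""
        print(f"saving {image_url} -> {dest_dir}")


    def collect_unicorns(start_url: str, dest_dir: str) -> int:
        """The API entry point that a CLI *or* a GUI would call."""
        saved = 0
        for image in iter_images(start_url):
            if is_unicorn(image):
                save(image, dest_dir)
                saved += 1
        return saved


    if __name__ == "__main__":
        print(collect_unicorns("https://example.com", "./images"))

A GUI author never touches the three inner functions; he or she only calls collect_unicorns (or finer-grained entry points) and decides how to present the result.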
The solution to this kind of problem is very simple, but in practice, a lot of junior programmers have trouble with this pattern. Here's the solution:
You design a unicorn-scraping API. This is the hard step; good API design is insanely hard, and there aren't many examples to study. One API that I think is worth studying is the one in Dave Hanson's book C Interfaces and Implementations.
Then you design your command-line interface. If the functionality you are exposing is not too complicated, this is not too hard. But if it's complicated, you may want to think seriously about managing your API using an embedded scripting language like Lua or Tcl and designing an interface for scripting rather than for the command line.
Finally you write your command-line processing code and glue everything together.
Your hypothetical successor builds his or her GUI in one of two ways: using the embedded scripting languages, or directly on top of your API.
As noted in other answers, model/view/controller may be a good pattern to use in designing your API.
You'll be taking input, executing an action, and presenting output. It might be a good idea to use a callback mechanism (such as event handlers, passing a method as a parameter, or passing this/self to the called class) to decouple the input and output methods from the execution of the action.
Aside from this, program to an interface, not to an implementation - the essence of MVC/MVP, as klausbyskov mentioned. E.g., don't directly call file.write(); make myModel.saveMyData(), which calls file.write, so someone else can make a somebodysModel.saveMyData() that writes to a database.
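A tiny sketch of that last indirection (Python, hypothetical names): the model depends on an abstract save operation, so swapping file output for a database touches only the wiring, not the callers.

    from abc import ABC, abstractmethod


    class DataStore(ABC):
        """The interface that callers program against."""

        @abstractmethod
        def write(self, data: str) -> None: ...


    class FileStore(DataStore):
        def __init__(self, path: str) -> None:
            self.path = path

        def write(self, data: str) -> None:
            with open(self.path, "a") as f:
                f.write(data + "\n")


    class MyModel:
        def __init__(self, store: DataStore) -> None:
            self.store = store  # injected: could be a DatabaseStore instead

        def save_my_data(self, data: str) -> None:
            self.store.write(data)  # no direct file.write() in sight


    MyModel(FileStore("out.txt")).save_my_data("unicorns: 3")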
