While trying to understand how an existing system will map to FHIR resources, I am stuck on Treatment/Care Preferences like the ones outlined here: http://wiki.hl7.org/index.php?title=Care_Preference
Would these preferences be handled in a list of extended objects? Or will FHIR be implementing a CarePreference resource?
This isn't catered for in the current set of resources. I guess you use Other (http://hl7.org/implement/standards/fhir/other.htm). It does seem like the kind of thing we'd want to define a resource for, but I'm not aware of any plans for one right now. I forwarded the suggestion along to the appropriate team.
btw, I'm not sure this question meets Stack Overflow's guidelines - it might get edited/closed.
"Other" is the solution for now. Speed of the development of a specific resource is likely to be dependent on the number asking for it and the detail of the use-cases they supply. Consider sharing these on the FHIR list server. Alerts might be another mechanism to flag important preferences.
The FHIR Conformance layer includes the StructureDefinition resource, and I'm trying to understand whether it is mandatory to provide anything there when my server does not have any custom resources.
We are going to support multiple Implementation Guides (e.g. US Core and CarinBB), which have their own profiles and extensions. But all their StructureDefinitions are already defined on hl7.org, and I can link to those profiles from my CapabilityStatement and instances. So do I need to expose those StructureDefinitions on my server?
Or should it just be empty, since I don't have anything custom?
Your CapabilityStatement should declare a StructureDefinition for each resource you support that indicates what your actual system capabilities are - i.e. what data you can actually consume or produce. Typically this will involve a combination of the expectations of a variety of profiles as well as some additional elements: you may have limits on repetitions, you might not support certain optional elements from some profiles, and you might support some additional elements or extensions that none of the profiles expect support for. Very few implementations will have internal support that exactly matches an official published profile. However, if yours does, you could technically point to that official profile rather than creating one of your own.
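As a sketch, the relevant CapabilityStatement fragment might declare both your own profile and the IG profiles you support - the example.org canonical here is a placeholder for the StructureDefinition you would author, while the US Core and CARIN BB URLs are the published ones:

    "rest": [{
      "mode": "server",
      "resource": [{
        "type": "Patient",
        "profile": "https://example.org/fhir/StructureDefinition/my-server-patient",
        "supportedProfile": [
          "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient",
          "http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient"
        ]
      }]
    }]

You would typically host the StructureDefinition you author yourself (so its canonical URL resolves), while the published ones can simply be referenced by URL.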
I am currently developing an application that parses and manipulates MIME messages, where these messages are a central part of the domain model. Although I have already implemented the functionality required so far for parsing these messages, it seems unnecessary to reinvent the wheel should I need to add further MIME features in the future. I could simply use an available library such as MimeKit, which probably does the job much more efficiently and seems like the more robust way to go. At the same time, I feel hesitant about this idea for a couple of reasons:
I am fairly new to software architecture, but from what I've gathered online, the consensus seems to be that domain objects should not have any external dependencies, since they model a domain that is specific to the business. So if the business rules change, it wouldn't be a good idea to have your domain model depend on an external library. However, since MIME is a standardized protocol, this shouldn't be a problem - but that leads to the second point.
Although MIME is a standardized protocol, it has come to my knowledge that the clients from which my application receives these messages do not always fully conform to the RFC specifications. I have yet to come across a problem regarding the MIME format of the messages, but with that in mind I feel as though there's no guarantee that I won't stumble across problems down the line.
I might have to add additional custom functionality regarding the parsing of the messages. This could, however, be solved by adding that functionality on top of the imported classes.
So my questions are:
Would it, under normal circumstances, be a valid alternative to use an external library for standardized protocols as part of the domain model? It doesn't seem right to sully my domain and application layers with external dependencies.
How should I go about this problem given my circumstances? Should I create an interface for the domain model so that I can swap in another implementation if needed in the future? This would require isolating the external dependencies in a class and mapping all the data to fit the contracts of the application layer, which almost seems like more work than implementing the protocol myself. Or should I just implement it myself and add new features successively, to make sure that I have full control of the domain model?
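For illustration, the kind of isolation I have in mind is roughly the following - the interface members are just examples I made up; only MimeKit's MimeMessage type and its Load/Subject/TextBody members are real library API:

    // Domain-facing contract: the domain and application layers depend only on this.
    public interface IMailMessage
    {
        string Subject { get; }
        string TextBody { get; }
    }

    // Adapter at the edge of the system: the only class that knows about MimeKit.
    public sealed class MimeKitMailMessage : IMailMessage
    {
        private readonly MimeKit.MimeMessage _inner;

        public MimeKitMailMessage(MimeKit.MimeMessage inner) => _inner = inner;

        public string Subject => _inner.Subject;
        public string TextBody => _inner.TextBody;

        // Parse from a raw stream using MimeKit, returning only the domain abstraction.
        public static IMailMessage Parse(System.IO.Stream stream) =>
            new MimeKitMailMessage(MimeKit.MimeMessage.Load(stream));
    }

That way, replacing MimeKit later would only mean writing a new implementation of the interface.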
I would highly appreciate your input.
Your entire question boils down to the following flawed thinking:
I am fairly new to software architecture, but from what I've gathered online, the consensus seems to be...
Why let consensus make your decisions for you?
Who are these people who make up this "consensus"?
How do you know they have any idea what they are talking about?
Trusting the consensus of unknown sources seems like a terrible way to make decisions for your project.
Do you want to write software that solves real problems? Or do you want to get lost in the weeds of idealism and have your project fail before it even gets out of the design phase?
Do what makes sense for you.
I'm currently evaluating a few different issue management tools and have narrowed it down to TargetProcess, Redmine and YouTrack. For what I need, TargetProcess seems to do everything with a lot less need for customisation; however, as the only person working on QA at a small startup, I'm trying to make sure that as much of the process is automated as possible.
YouTrack has a workflow editor which allows you to write validation rules for your issues, and would therefore allow me to specify that you can't move an issue of a certain type into a certain state without having a related issue of another type; for example, you cannot move a feature out of "New" without having a set of related requirements in the form of test cases.
While this isn't as ingrained in Redmine, there is a plugin which allows you to write these types of rules. I haven't, however, been able to find anything of the sort for TargetProcess, and worry that the lack of this sort of deep customisation will add an extra time sink, as I'll have to spend more time on the process myself.
Is there any way to achieve this in TargetProcess, be it via a plugin or an external service? I can see that I could hook something up to the REST API, but this would make it difficult to give feedback as to why an issue had not been progressed. TargetProcess is an impressive tool, but it is very expensive, and unless it does everything I want, it is difficult to justify the outlay.
TL;DR
Is there a mechanism for writing business rules into TargetProcess such that the proper QA process is enforced, so I can concentrate on providing value through QA rather than process management?
There are no customizable Business Rules in Targetprocess so far. The only thing that exists is a Mashup that allows some rule customization related to custom fields:
https://github.com/TargetProcess/TP3MashupLibrary/tree/master/Custom%20Field%20Constraints
Custom Business Rules have been requested by many people, and we are going to start development this year.
TL;DR: Can someone point me to a thorough implementation of a caching system that is added to the solution through interception?
I'm refactoring one of my solutions so that cross-cutting concerns are implemented through Unity Interception. I've read the guides from MSFT, and now I think I can very easily implement the interception behaviors.
However, I was wondering about caching: I want to use cache regions and keys consistently throughout the solution. Furthermore, I have key-specific configurations for expiration in my caching system.
One example in Unity's Developer Guide checks the method name -- this is a bad approach, since it would (obviously) mean altering the implementation every time a new class/method must use the cache.
I'm having this (mad) idea of implementing a configurable Interceptor that learns how to compose the region and key from the given parameters, and is configurable for each class(type)/method. However this would push a lot of responsibility to configuration; I don't like the feeling that I'm programming in the *.config file.
As you can see, I'm a tad lost on how to go about this. I don't like singletons, and right now the caching system is a singleton accessed everywhere in the solution. Can someone link me to good documentation on how I should proceed? Is it possible to add caching and have proper keys/regions defined on the cache?
A quick search on this matter led me to the "Attribute Based Cache using Unity Interception" project on CodePlex. The entire project looks to have been abandoned at some alpha stage; however, it should provide you with a baseline to start from.
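In case it saves you some digging, the general shape of that attribute-based approach with Unity's ICallHandler is roughly the following sketch - the attribute, the region names, and the in-memory dictionary are placeholders for your real caching system and its expiration configuration:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Reflection;
    using Microsoft.Practices.Unity.InterceptionExtension;

    // Hypothetical marker attribute: carries the cache region for a method.
    [AttributeUsage(AttributeTargets.Method)]
    public sealed class CachedAttribute : Attribute
    {
        public string Region { get; set; }
    }

    public sealed class CacheCallHandler : ICallHandler
    {
        // Stand-in for the real caching system; keys are "region/method(args)".
        private static readonly ConcurrentDictionary<string, object> Cache =
            new ConcurrentDictionary<string, object>();

        public int Order { get; set; }

        public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
        {
            var attr = input.MethodBase.GetCustomAttribute<CachedAttribute>();
            if (attr == null)
                return getNext()(input, getNext); // not marked as cacheable: pass through

            // Compose the region/key from the attribute and the actual call arguments.
            var args = string.Join(",", input.Inputs.Cast<object>());
            var key = $"{attr.Region}/{input.MethodBase.Name}({args})";

            if (Cache.TryGetValue(key, out var cached))
                return input.CreateMethodReturn(cached); // cache hit: skip the target call

            var result = getNext()(input, getNext); // cache miss: invoke the real method
            if (result.Exception == null)
                Cache[key] = result.ReturnValue;
            return result;
        }
    }

Note that where the attribute must live (interface method vs. class method) depends on which interceptor you configure, and a real implementation would also wire in your key-specific expiration settings rather than a plain dictionary.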
Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered - that could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is that eventually we will integrate SNMP into our Java application server, which will allow end users to incorporate it into their pre-existing Network Management Systems (NMS).
Unfortunately I'm feeling entirely confused by what is meant to be going on. What I understood from talking to the end users (which was unfortunately before any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree-type display, giving them feedback on different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is - there don't seem to be any specific examples of what the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had, wrongly, assumed that the interface of the agent would allow the monitored attributes to be requested, and then in turn the values for those attributes - allowing any new Agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case, and the Agent must instead define MIBs which can be used by the NMS to determine those attributes. My confusion increases when people start referring to thousands of existing MIBs that can be reused, which I don't understand. Is the intention that a single MIB definition can describe a particular attribute of a network device (something simple like whether a router is connected to the internet: yes/no) across many different devices? If so, I don't believe our software would allow the monitoring of anything common to any other device/system - but should we be looking for already existing MIBs anyway? At the moment I don't really see any good rationale for such a system; surely it would be easier for the Agent to export that information itself - so I'd appreciate it if someone could enlighten me!
I think it would help if I were able to set up a simple SNMP agent and some sort of client, so I could begin to see the process and eventually inspect the communication between the two, but I'm finding it difficult to find anywhere that provides information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS, but their 'get started quick' section recommends downloading a 600MB virtual machine - surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about MIBs, and having never had to deal with anything like the referenced RFCs before, I find that while they may contain all of the information, they seem completely impenetrable to me at the moment. Are there any books that can be recommended for an overview and an implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have been reading SNMP material piece by piece, in a disorganized way. That is not recommended and has, of course, led you to confusion.
How about forgetting what you have learnt so far and diving into a good book, such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it.
You cannot depend on a forum to teach you the ABCs; in my experience that's impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
Generally you would create, ship, and use your own MIBs that describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes, there are thousands of other pre-existing MIBs, and the likelihood that you need more than one or two of them is remote. They are typically published MIBs for existing products.
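To give you a flavour, a fragment of a (completely made-up) MIB module defining one readable object might look like this in SMIv2 notation:

    exampleAppRequestCount OBJECT-TYPE
        SYNTAX      Counter32
        MAX-ACCESS  read-only
        STATUS      current
        DESCRIPTION "Total requests handled by the application server."
        ::= { exampleAppObjects 1 }

The ::= clause is what pins the human-readable name to a numeric OID under its parent node - that mapping is exactly the "phone directory" role described above.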
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily), then examine the results using a packet capture tool like Wireshark.
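For example, once a local snmpd is running (assuming a throwaway v2c setup with the default "public" community), the Net-SNMP command-line tools let you query it immediately:

    # read a single value: the standard system description
    snmpget -v2c -c public localhost sysDescr.0

    # walk the entire standard "system" subtree
    snmpwalk -v2c -c public localhost system

Watching these two commands in Wireshark is a quick way to see exactly how the requests and responses are encoded on the wire.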
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl NetSNMP code were very straightforward to implement and query.
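If it helps anyone else, one common pattern along those lines (the script path and name below are placeholders) is the extend directive in snmpd.conf, which publishes a shell command's output via the NET-SNMP-EXTEND-MIB:

    # in snmpd.conf: expose the output of a script under NET-SNMP-EXTEND-MIB
    extend appStatus /usr/local/bin/app-status.sh

    # then query the first line of its output:
    snmpget -v2c -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."appStatus"'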