Using interception to implement caching - how to define keys?

TL;DR Can someone point me to a thorough implementation of a caching system that is added to the solution through interception?
I'm refactoring one of my solutions so that cross-cutting concerns are implemented through Unity Interception. I've read the guides from MSFT, and I now think I can implement the interception behaviors quite easily.
However, I was wondering about caching; I want to use the cache regions and keys consistently throughout the solution. Furthermore, I have key-specific configurations for expiration on my caching system.
One example in Unity's Developer Guide checks the method name -- this is a bad approach, since it would mean altering the implementation every time a new class/method must use the cache (obviously).
I have this (mad) idea of implementing a configurable interceptor that learns how to compose the region and key from the given parameters, and that can be configured per class (type)/method. However, this would push a lot of responsibility into configuration; I don't like the feeling that I'm programming in the *.config file.
As you can see, I'm a tad bit lost on how to go about this. I don't like singletons, and right now the caching system is a singleton accessed everywhere in the solution. Can someone link me to good documentation on how I should proceed? Is it possible to add caching through interception and still have proper keys/regions defined on the cache?

A quick search on a similar matter led me to the "Attribute Based Cache using Unity Interception" project on CodePlex. The entire project looks to be abandoned at some alpha stage; however, it should provide you with a baseline to start from.
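To make the key question concrete, here is a rough sketch (not taken from that project) of an attribute-driven call handler that composes the region from the declaring type and the key from the method name plus its argument values. ICache is a hypothetical stand-in for your existing caching system:

```csharp
// Rough sketch only: a Unity call handler that composes the cache region from
// the declaring type and the key from the method name plus its arguments.
// ICache is a hypothetical stand-in for your existing caching system
// (regions + keys + per-key expiration rules live behind it).
using System.Linq;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

public interface ICache
{
    bool TryGet(string region, string key, out object value);
    void Put(string region, string key, object value);
}

public class CacheAttribute : HandlerAttribute
{
    // Optional per-class/per-method override of the region.
    public string Region { get; set; }

    public override ICallHandler CreateHandler(IUnityContainer container)
    {
        return new CachingCallHandler(container.Resolve<ICache>(), Region);
    }
}

public class CachingCallHandler : ICallHandler
{
    private readonly ICache _cache;
    private readonly string _region;

    public CachingCallHandler(ICache cache, string region)
    {
        _cache = cache;
        _region = region;
    }

    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        // Region defaults to the declaring type; key = method name + argument values.
        string region = _region ?? input.MethodBase.DeclaringType.FullName;
        var argValues = input.Arguments.Cast<object>()
                             .Select(a => a == null ? "null" : a.ToString());
        string key = input.MethodBase.Name + "(" + string.Join(",", argValues) + ")";

        object cached;
        if (_cache.TryGet(region, key, out cached))
            return input.CreateMethodReturn(cached);

        IMethodReturn result = getNext()(input, getNext);
        if (result.Exception == null)
            _cache.Put(region, key, result.ReturnValue);
        return result;
    }
}
```

Methods (or classes) would then be decorated with [Cache] or [Cache(Region = "Users")], so anything new that needs caching simply opts in; the interceptor itself never has to change.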

Related

Would it be valid to use external libraries for standardized protocols (MIME) as a part of the domain model?

I am currently developing an application that parses and manipulates MIME messages, where these messages are a central part of the domain model. Although I have already implemented the functionality required, for the moment, to parse these messages, it seems unnecessary to reinvent the wheel should I need to add additional MIME features in the future. I could simply use an available library such as MimeKit, which probably does the job much more efficiently and seems like the more robust way to go. At the same time, I feel hesitant about this idea for a couple of reasons:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be that the domain objects should not have any external dependencies, since they model a domain that is specific to the business. So if the business rules change, it wouldn't be a good idea to have your domain model depend on an external library. However, since MIME is a standardized protocol this shouldn't be a problem, but that leads to the second point.
Although MIME is a standardized protocol, it has come to my knowledge that the clients from which my application receives these messages do not always fully conform to the RFC specifications. I have yet to come across a problem regarding the MIME format of the messages, but with that in mind I feel as though there's no guarantee that I won't stumble across problems down the line.
I might have to add additional custom functionality regarding the parsing of the messages. This could however be solved by adding that functionality on top of the imported classes.
So my questions are:
Would it, under normal circumstances, be a valid alternative to use an external library for standardized protocols as a part of the domain model? It doesn't seem right to sully my domain and application layers with external dependencies.
How should I go about this problem given my circumstances? Should I create an interface for the domain model so that I can swap the implementation out if needed in the future, roughly along the lines of the sketch below? This would require isolating the external dependency in a class and mapping all the data to fit the contracts of the application layer, which almost seems like more work than implementing the protocol myself. Or should I just implement it myself and add new features successively, just to make sure that I have full control of the domain model?
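To illustrate that second option, this is roughly the kind of wrapper I have in mind; all names here are placeholders, and only the adapter knows about MimeKit:

```csharp
// Sketch of option two: the domain and application layers only see
// IMailMessage / IMailMessageParser; MimeKit is isolated in one adapter.
// All names here are placeholders, not an actual design.
using System.IO;
using System.Linq;
using MimeKit;

public interface IMailMessage
{
    string Subject { get; }
    string From { get; }
    string Body { get; }
}

public interface IMailMessageParser
{
    IMailMessage Parse(Stream rawMessage);
}

public class MimeKitMessageParser : IMailMessageParser
{
    public IMailMessage Parse(Stream rawMessage)
    {
        // The MimeKit dependency never leaves this class.
        var mime = MimeMessage.Load(rawMessage);
        string from = mime.From.Select(a => a.ToString()).FirstOrDefault() ?? "";
        string body = mime.TextBody ?? mime.HtmlBody ?? "";
        return new ParsedMailMessage(mime.Subject ?? "", from, body);
    }

    private class ParsedMailMessage : IMailMessage
    {
        public ParsedMailMessage(string subject, string from, string body)
        {
            Subject = subject;
            From = from;
            Body = body;
        }

        public string Subject { get; private set; }
        public string From { get; private set; }
        public string Body { get; private set; }
    }
}
```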
I would highly appreciate your input.
Your entire question boils down to the following flawed thinking:
I am fairly new to software architecture, but from what I've gathered online the consensus seems to be...
Why let consensus make your decisions for you?
Who are these people who make up this "consensus"?
How do you know they have any idea what they are talking about?
Trusting the consensus of unknown sources seems like a terrible way to make decisions for your project.
Do you want to write software that solves real problems? Or do you want to get lost in the weeds of idealism and have your project fail before it even gets out of the design phase?
Do what makes sense for you.

Best practices for using large list of claims in Web API/OWIN

I'm trying to implement a claims-based authorization setup using Web API/OWIN/OAuth, and I want to find out the best way to manage a more fine-grained type of authorization.
I don't like the idea of using just roles, as there needs to be lots of fine-grained authorization in the application I'm working on. I was thinking a roles+permissions approach made more sense (where a role simply maps to a subset of permissions). The permissions would be based on an action+resource pair (e.g. CanViewEmployee, CanEditEmployee, etc.). Nothing out of the ordinary here.
However, I'm wondering how this should be implemented using OWIN/OAuth, possibly using Thinktecture IdentityServer. I am trying to avoid hard coding the permissions in the custom AuthorizationManager I have as they need to be easily changed without a rebuild. I know it is an option to put these as policies in the web.config (mapping a resource+action to a claim type and value), but if we are talking about dozens, maybe even hundreds of permissions, this seems like it could get out of hand pretty quickly as well.
I guess the third option would be to drive it all from the database, but managing it from there would also need some kind of front-end to do so, which is more effort than just changing a config/XML file.
Am I missing some other options/best practices here when it comes to large numbers of claims/permissions, or perhaps some other utility or package I could use to help manage this when the numbers get out of hand?
Decoupling these permissions into a separate authorization manager class is a good first step. In that code you could then hard-code the rules for your permissions (such as: the "Admin" role can do actions X, Y and Z, but only the "Manager" role can do X or Y). But you can also have the code in the authorization manager perform dynamic look-ups to check permissions that have been set in a database (for example).
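A minimal sketch of what that might look like, assuming a ClaimsAuthorizationManager-style entry point and a hypothetical IPermissionStore behind it (back the store with config or a database as needed):

```csharp
// Sketch of a decoupled authorization manager. Resource/action pairs are
// checked in one place; the role -> permission mapping comes from a store
// (IPermissionStore is hypothetical - back it with config or a database).
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;

public interface IPermissionStore
{
    // e.g. "Manager" -> { "Employee:View", "Employee:Edit" }
    IEnumerable<string> GetPermissionsForRole(string role);
}

public class AppAuthorizationManager : ClaimsAuthorizationManager
{
    private readonly IPermissionStore _store;

    public AppAuthorizationManager(IPermissionStore store)
    {
        _store = store;
    }

    public override bool CheckAccess(AuthorizationContext context)
    {
        // Resource and Action arrive as claims, e.g. "Employee" / "View".
        string resource = context.Resource.First().Value;
        string action = context.Action.First().Value;
        string required = resource + ":" + action;

        return context.Principal.Claims
            .Where(c => c.Type == ClaimTypes.Role)
            .SelectMany(c => _store.GetPermissionsForRole(c.Value))
            .Contains(required);
    }
}
```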
The additional benefit of doing this in code is that you can unit test it all to prove your permissions logic is implemented correctly (and thus will be properly enforced at runtime). This will be useful if your permissions do change frequently.
Also, the decoupling will help if you need to redeploy frequently (because of the frequent changes), since the code can be isolated into its own assembly.

Infinispan JPA Cache loader?

How do I implement an Infinispan JPA cache loader? Is there any pattern, or a recommended way to implement it with the Infinispan API?
Most existing CacheLoader implementations in Infinispan assume the data just needs storage and treat it blindly as an array of bytes. The integration API in Infinispan doesn't expose much context other than "store(Key, Value)" or "load(Key)". I'm oversimplifying a bit, but that's the core.
There is one exception: the LuceneCacheLoader. It was designed to work exclusively in combination with the Lucene Directory for Infinispan, and it takes advantage of two facts:
It knows which types to expect
It knows the needs of the Directory (such as its access pattern)
Have a look at the sources to get inspired; note I only implemented loading (it's a CacheLoader).
If you control both the application using Infinispan and the CacheLoader, you could take advantage of these details as well.
Tricky aspects:
When writing multiple keys, even in the same transaction, you'll have access to only one entry at a time in the scope of the CacheLoader logic -> hard to map relations: you have to deal with one entity at a time and "restore" the connections between them
With write-behind you might receive entries out of order -> not sure how to deal with referential integrity
With write-behind you're not going to have the same transactional context -> might be acceptable?
Taking these into account, I'm sure you could write one. How easy? That depends on your app.
I'm not sure whether a general-purpose solution could work. If you find out it can, please contribute it, as it would be a great addition to the project.

Organizing application in layers

I'm developing a part of an application, named A. The application I want to plug my DLL into, called application B, is in VB6, and my code is in VB.NET. (Application B will in time be converted to VB.NET.) My main question is: what is the best way for me to organize my code (application A)?
I want to split application A into layers (service, business, data access), so it will be easy to integrate application A into B when B is converted to VB.NET. I also want to learn about topics like layered architecture, patterns, dependency inversion, Entity Framework and so on. Although my application (A) is small, I want to organize my code in the best way.
The application I'm working on (A) uses web services for authenticating users and for sending a schema to an organization. The user selects a menu item in application B, and then some functions in my application A are called.
In application A I have an auto-generated schema class from an XSD schema. I fill this schema object with data and serialize the object to a memory stream (is it a good solution to use a memory stream? I don't have to save the data yet), wrap the XML inside a CDATA block, and return the CDATA block as a string that is assigned to a string property of a web service -- roughly as in the sketch below.
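Simplified (the real schema class name is omitted; a generic helper is shown just to illustrate the flow):

```csharp
// Simplified illustration of the flow described above: serialize the
// generated schema object to a MemoryStream, read the XML back as a string
// and wrap it in a CDATA block for the web service property.
using System.IO;
using System.Text;
using System.Xml.Serialization;

public static class SchemaWriter
{
    public static string ToCDataBlock<T>(T schemaObject)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, schemaObject);
            string xml = Encoding.UTF8.GetString(stream.ToArray());
            return "<![CDATA[" + xml + "]]>";
        }
    }
}
```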
I am also using Entity framework for database communication (to learn how this is done for the future work with application B). I have two entities in my .edmx, User and Payer.
I also want to use the repository pattern (is this a good choice?) to make a façade between the DAL and the BLL.
My application has functions for GeneratingSchema (filling the schema object with data), GetSchemaContent, GetSchemaInformation, GenerateCDATABlock, WriteToTextFile, MemoryStreamToString, EncryptData, and some functions that use web services, like SendSchema, AuthenticateUser, GetAvailableServices and so on.
I’m not sure where I should put it all?
I think I have to have some interfaces like IRepository, ISchema (a contract for the auto-generated schema class -- how can I do this?), ICryptoManager, IFileManager and so on, and classes that implement the interfaces.
My DAL will be the Entity Framework. And I want a repository façade in my BLL (IRepository, UserRepository, PayerRepository) and classes for management (like the classes I have mentioned above) holding functions like WriteToFile, EncryptData and so on.
Is this a good solution (do I need a service layer? all my GUI is in application B), and how should I organize my layers, interfaces, classes and functions in Visual Studio?
Thanks in advance.
This is one heck of a question, so I thought I might try to chip away at a few parts for you so there's less for the next guy to answer...
For application B (VB6) to call application/assemblies A, I'm going to assume you're exposing the relevant parts of App A as COM components, using ComVisibleAttribute and similar, much like described in this article. I only know of one other way (WCF over COM) but I've never tried it myself.
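For illustration, the COM-visible surface might look roughly like this (names and GUIDs are placeholders; the assembly still needs to be registered with regasm before VB6 can create the class):

```csharp
// Hedged sketch of exposing part of App A to VB6 via COM interop.
// Interface/class names and GUIDs are placeholders; the assembly must also
// be registered (regasm) before the VB6 application can create the class.
using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")]
public interface ISchemaService
{
    string GenerateSchemaCData(string userId);
}

[ComVisible(true)]
[Guid("66666666-7777-8888-9999-000000000000")]
[ClassInterface(ClassInterfaceType.None)]
public class SchemaService : ISchemaService
{
    public string GenerateSchemaCData(string userId)
    {
        // Delegates into App A's business layer in the real code.
        return "<![CDATA[...]]>";
    }
}
```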
Splitting your solution(s) into various tiers and layers is a very subjective/debatable topic, and will always come down to a combination of personal preference, business requirements, time available, etc. However, regardless of the depth of your tiers and layers, it is good to understand the how and the why.
To get you started, here are a couple of articles:
Wikipedia's general overview on "Multitier Architectures"
MSDN's very own "Building an N-Tier Application in .Net"
Inversion of Control is also a very good pattern to get into right now; with ever-increasing (and brilliant!) resources becoming available on the .NET platform, it's definitely worth investing some time to learn.
Although I haven't explored the full extent of IoC, I do love dependency injection (a form of IoC, if I understand correctly, though people seem to muddle the IoC/DI terms quite a lot). My personal preference for DI right now is the open source Ninject project, which has plenty of resources online and a reasonable wiki section talking you through the various aspects.
There are many more takes on DI and IoC, so I don't want to even attempt to provide you a comprehensive list for fear of being flamed for missing out somebody's favourite. Just have a search, see which you like the look of and have a play with it. Make sure to try a couple if you have the time.
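To give a flavour, a minimal Ninject composition root might look like this (the interfaces and classes are placeholders borrowed from your own naming):

```csharp
// A minimal Ninject composition root, just to give a flavour of DI wiring.
// The interfaces/classes are placeholders taken from the question's naming.
using Ninject;

public interface IUserRepository { }
public interface ICryptoManager { }
public class UserRepository : IUserRepository { }
public class CryptoManager : ICryptoManager { }

public static class CompositionRoot
{
    public static IUserRepository CreateUserRepository()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IUserRepository>().To<UserRepository>();
        kernel.Bind<ICryptoManager>().To<CryptoManager>();
        return kernel.Get<IUserRepository>();
    }
}
```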
Again, the Repository Pattern -- often complemented well by the Unit of Work pattern -- is also a great topic to mull over for hours. I've seen a lot of good examples out on the inter-webs, and as many bad examples. My only advice here is to try it for yourself... see what works for you, develop a version of the patterns that suits you best, and try to keep things consistent for maintainability.
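As a rough, non-authoritative sketch of the shape this often takes over Entity Framework (type names are placeholders, to be adapted to the entities in your .edmx):

```csharp
// Rough sketch of a repository over Entity Framework, with SaveChanges kept
// behind a small unit-of-work abstraction. Type names are placeholders;
// adapt to the entities in your .edmx (User, Payer).
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context)
    {
        _context = context;
    }

    public T GetById(int id) { return _context.Set<T>().Find(id); }
    public IEnumerable<T> GetAll() { return _context.Set<T>().ToList(); }
    public void Add(T entity) { _context.Set<T>().Add(entity); }
}

public interface IUnitOfWork
{
    void Commit();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EfUnitOfWork(DbContext context)
    {
        _context = context;
    }

    public void Commit() { _context.SaveChanges(); }
}
```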
For organising all these tiers and layers in VS, I recommend keeping each independent tier/layer in its own solution folder (right-click the Solution, Add New Solution Folder), or in some cases (larger projects) its own solution, preferably with an automated build service to update dependent projects with up-to-date assemblies as required. Again, a broad subject and totally down to personal preference. Just keep an eye out when designing your application for potential circular references.
So, I'm afraid that doesn't even slightly answer your question, but hopefully provides you with some resources to check out and a few hours of reading.
Good luck!

Can/Should I disable the cache expiry when backing data store is unavailable?

I've just started out with Ehcache, and it seems pretty good so far. I'm using it in a simplistic fashion to speed up reads against a database, but I wonder whether I can also use it to let the application stay up if the database is unavailable for short periods. (Update: my context is an application with high-availability modules that only read from the database.)
It seems like I could do that by disabling expiry in the event of a database read problem, and re-enabling it when a read works again.
What do you think? Is that a reasonable approach, or have I missed something? If it's a fair approach, any tips on how best to implement it are appreciated.
Update - Ehcache supports a dynamically configurable option to set/unset the cache as 'eternal'. This seems to do what I need.
Interesting question - usually, the answer would be "it depends".
Firstly, if you have database reliability problems, I'd invest time and energy in fixing them, rather than applying a bandaid solution.
Secondly, most applications need both reading and writing to work - it doesn't seem to make sense to keep your app up for reads only.
However, if your app has a genuine "read only" function, and there's a known and controlled reason for database down time (e.g. backups), then yes, you can use your cache to keep the application up and running while the database is down. I would do this by extending the cache periods, rather than trying to code specific edge cases. For instance, you might have a background process which checks whether the database is available and swaps in a different configuration file when there's trouble.
