MVC3, Unity Framework with multiple configurations - asp.net-mvc-3

We have a multi-company capable site which requires unique business logic for each company. We are using constructor dependency injection in our controllers, but we would need to swap the Unity container being used based upon a user's company. I was thinking that you could examine the user's cookie before setting the container for the current HttpContext. Is this even possible?

It's very doable. What I'd do is set up a "master" container, and then a child container for each company. That way you have the default configuration in one place, and then you can customize per company easily without having to reconfigure everything each time. Save the child containers in some easily indexed way (a dictionary of company -> container, perhaps).
Then, write an HttpModule implementation that runs early in the pipeline to figure out which company the request is for. Use that to figure out the appropriate container to use. And from there you're pretty much set.
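As a rough sketch of what that could look like (Unity 2.x style API; the service interfaces, the company names and the cookie handling below are made-up placeholders, not anything from your app):

using System;
using System.Collections.Concurrent;
using System.Web;
using Microsoft.Practices.Unity;

// Placeholder service types for illustration only.
public interface IPricingService { }
public class DefaultPricingService : IPricingService { }
public class AcmePricingService : IPricingService { }

// Master container holds the default registrations; each company gets a child container
// that can override them.
public static class CompanyContainers
{
    private static readonly IUnityContainer Master = BuildMaster();
    private static readonly ConcurrentDictionary<string, IUnityContainer> Children =
        new ConcurrentDictionary<string, IUnityContainer>();

    private static IUnityContainer BuildMaster()
    {
        var container = new UnityContainer();
        container.RegisterType<IPricingService, DefaultPricingService>(); // default business logic
        return container;
    }

    public static IUnityContainer For(string company)
    {
        return Children.GetOrAdd(company, key =>
        {
            var child = Master.CreateChildContainer();
            if (key == "AcmeCorp")
            {
                child.RegisterType<IPricingService, AcmePricingService>(); // per-company override
            }
            return child;
        });
    }
}

// HttpModule that runs early in the pipeline, reads the company from the cookie and
// stashes the matching container for the current request.
public class CompanyContainerModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var cookie = app.Context.Request.Cookies["company"];
            var company = cookie != null ? cookie.Value : "default";
            app.Context.Items["CompanyContainer"] = CompanyContainers.For(company);
        };
    }

    public void Dispose() { }
}

// Your MVC dependency resolver would then pull HttpContext.Current.Items["CompanyContainer"]
// instead of a single static container.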
I would be worried as a customer of your system that you're not isolating my data sufficiently; wouldn't want to leak information across customers and get sued.


How to reduce staging tasks with CI in Kentico 13

I'm looking for advice on how to deal with Kentico's staging tasks as they relate to Kentico 13's continuous integration development model.
Here's our challenge:
Each developer has their own Kentico database, developing in Visual Studio, with Git source control through Azure DevOps and the CI switch turned on in Kentico.
As the developer makes a change to a Kentico object, for example, adds a new property to a custom page type, the CI process in Kentico serializes the page type onto the file system for that developer. They create a pull request and the new XML file that represents the serialized page type is now in source control... along with basically every other Kentico object.
When the DevOps release process kicks in, our shared build server is updated through the CIRestore process with the new page type property. All good - everything working as expected at this point.
However, at some stage we need to get this new page type property from the shared build server into testing, and later production. Traditionally we'd use Kentico Staging to do this. The problem we're facing is that during the CIRestore process on our build server every single Kentico object is updated regardless of whether an actual change was made... and somewhere in that list of hundreds and hundreds of items in the Staging task list is our updated page type with the new property.
The issue is that we have no way of identifying what's actually changed and subsequently what needs to be staged from our build server through to the test instance of Kentico. We don't want to stage everything as there are hundreds and hundreds of items.
We've reviewed the repository.config file and have made some changes to exclude many object types. We initially thought we could use this approach to include only the page types (and other objects) that we want to monitor in the CI process; however, this config works in an exclude manner rather than an include manner. So we'd have to add an entry to exclude every object by name, which seems error-prone and redundant.
I'm hoping someone's been through this pain and I'm looking for suggestions on how we might handle this challenge.
Check out this thread on devnet. You can actually write a global event handler to tell the system to not generate tasks in the staging module based on certain conditions.
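A rough sketch of that kind of handler, assuming it lives in a custom module class in an assembly Kentico loads (the module name and the "keep only page types" condition are placeholders; verify StagingEvents and the event args against the exact Kentico 13 API version you're on):

using System;
using CMS;
using CMS.DataEngine;
using CMS.Synchronization;

[assembly: RegisterModule(typeof(StagingTaskFilterModule))]

// Cancels staging task logging for everything except the object types we actually care about.
public class StagingTaskFilterModule : Module
{
    public StagingTaskFilterModule() : base("StagingTaskFilter")
    {
    }

    protected override void OnInit()
    {
        base.OnInit();

        StagingEvents.LogTask.Before += (sender, e) =>
        {
            // Placeholder condition: only let page type (document type) changes become staging tasks.
            if (!string.Equals(e.Task.TaskObjectType, "cms.documenttype", StringComparison.OrdinalIgnoreCase))
            {
                e.Cancel();
            }
        };
    }
}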
You could also try excluding all objects and then using the IncludedObjectTypes as a whitelist for just the ones that you want. Check out this documentation.
In general, CI does take some time to set up and get right, in our experience. This Kentico CI cheat sheet can be helpful as well.

Microservices and isolated persistence - how should the data be stored/fetched?

At my company, we're about to move to a microservices architecture. I've read a lot about it, and there are tons of obscure areas that are specific to the project being built, but one area seems to get everyone to agree: microservices need isolated persistence, or, to put it another way, they need to have their own database.
Now I love the idea, that means every microservice has its own database schema, its own domain objects and is 100% independent of any other microservice data structure.
There are things I don't quite understand though.
The "Customer Service" is obviously central to the application, and we can see that basically any other microservice will need some data about the user at some point, whether it's the user's credit amount, their ID, or their name.
But since other microservices can't directly read the Customer Service database, they'll need to query this service over and over again. This is fine (I guess) for simple stuff like getting the name of the currently logged-in user, but when we need to display 60 users on a page and we can't do any SQL join, it feels like we're missing something. This is even worse when microservices depend upon tons of other microservices.
So I found out that some people actually queried microservices X times a day to get data into their own microservices.
So if microservice "Search" needs data from "Product", "Customer", it'll actually query these microservices and will persist the data with its own data structure.
The question I have is should it be "Search" that queries "Product" and "Customer", or should "Product" and "Customer" send data to "Search" ?
The first option looks a bit easier to do: we only need to have this logic on one side, and that's where the data is needed. But the data will only be as fresh as the last pull, which is not very smart, though it could definitely work.
The second option looks a bit more difficult but also more scalable: since the data is sent from the place where it changes, it can be very fresh when we need it, and the updates can be more granular.
I think you correctly identified downsides to the microservices approach! And there are no elegant solutions to these specific problems. You will have to eat the additional work and architecture deterioration that this brings.
Concretely addressing your question now:
The question I have is should it be "Search" that queries "Product" and "Customer", or should "Product" and "Customer" send data to "Search" ?
You seem to be looking for a data synchronization service. You want to decide between push and pull. You are concerned about data freshness and logic duplication.
The key point here is that the source service cannot know about its consumers. This is to prevent an unwanted reverse dependency. This would break architectural isolation. Any data sync process that maintains this is fine. You can do what is most convenient.
For example, you could make the data source expose two APIs:
An API to get the whole data set. This would be called periodically by the destination (e.g. nightly). It can also be used to seed the destination at will and to fix data errors there.
A feed of changes in the source database keyed by the date and time the change occurred. The destination can now poll that change feed very frequently (e.g. every few seconds or minutes) and apply the small delta that occurred.
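As a minimal illustration of the pull side, the destination (e.g. "Search") could poll such a change feed like this (the /changes endpoint, the Change shape and ApplyChange are all hypothetical):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical change record exposed by the source service's change feed.
public record Change(string EntityId, DateTimeOffset ChangedAt, string Payload);

public class CustomerChangeFeedPoller
{
    private readonly HttpClient _http =
        new HttpClient { BaseAddress = new Uri("https://customer-service.internal/") };
    private DateTimeOffset _lastSeen = DateTimeOffset.MinValue;

    // Called every few seconds or minutes by a scheduler.
    public async Task PollOnceAsync()
    {
        var since = Uri.EscapeDataString(_lastSeen.ToString("o"));
        var changes = await _http.GetFromJsonAsync<Change[]>("changes?since=" + since);

        foreach (var change in changes ?? Array.Empty<Change>())
        {
            ApplyChange(change); // upsert into the destination's own data structure
            if (change.ChangedAt > _lastSeen)
            {
                _lastSeen = change.ChangedAt;
            }
        }
    }

    private void ApplyChange(Change change)
    {
        // Hypothetical: map the payload into the destination's own schema and store it.
    }
}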
You can even build a realtime change feed through publish-subscribe middleware. Many message queue products can do that. The source would just send out changes to the middleware.
Building all of this is conceptually simple but takes a lot of work. It also creates lots of ongoing work and increases the potential for bugs. Debugging becomes much harder. I have worked on systems like that.
I'm going to add a subjective note: Microservices are not well understood by many teams. The downsides are often ignored. You identified a few of the downsides correctly and they are nasty! Given what I read on the web I believe many teams do not realize the mess they are getting themselves into. Managing disparate data stores can be a nightmare. This is not a one-time "mess" but an ongoing one.
As an alternative I'd recommend using a common data store and building services simply as classes or projects that live in the same process. This gives you the microservices code structuring with the convenience of normal development. It also leaves a few of the upsides of microservices on the table.
Your identification of the problem is correct.
But the solution will vary from use case to use case.
In your search example, the product and customer services should publish their events to Kafka or similar messaging, and the search service should listen to them and update its own store, as sketched below.
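A minimal sketch of that event flow with the Confluent.Kafka client (the topic name, message shape and broker address are made up for illustration):

using System;
using System.Threading;
using Confluent.Kafka;

class EventFlowSketch
{
    // The customer service publishes change events...
    static void PublishCustomerChanged(string customerId, string payloadJson)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        using var producer = new ProducerBuilder<string, string>(config).Build();
        producer.Produce("customer-events", new Message<string, string> { Key = customerId, Value = payloadJson });
        producer.Flush(TimeSpan.FromSeconds(5));
    }

    // ...and the search service consumes them and updates its own store.
    static void ConsumeCustomerEvents(CancellationToken token)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "search-service",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };
        using var consumer = new ConsumerBuilder<string, string>(config).Build();
        consumer.Subscribe("customer-events");
        while (!token.IsCancellationRequested)
        {
            var result = consumer.Consume(token);
            // Hypothetical: upsert result.Message.Value into the search index keyed by result.Message.Key.
        }
    }
}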
In the case of, say, an order service that, while creating an order for a customer, wants to check that the customer exists, you might do that by calling the synchronous API of the customer service. There are various other approaches for that too; I have answered them here: Microservices and allowing for one to be unavailable.
From my perspective, synchronous communication between services should be avoided; there are ways around it, and the link above should help.
You can use domain-driven design to correctly split your services and define their contracts.

Symfony2 multiple kernel?

I have an application which has a core website, an API and an admin area. I wanted to know: is it a bad idea to have everything in one app, or should I create different Symfony2 projects, or should I split them into different kernels?
I'm not sure whether adding lots of bundles to the same kernel will affect performance a lot or just a little bit, which would not matter.
The options are:
keep everything on the same kernel; it won't make much difference
have multiple kernels for different parts of the application (API, admin and core website)
create a different Symfony2 project for the admin area and the API
or your wise words :)
You can define more "environments".
For example, in AppKernel.php:

<?php

use Symfony\Component\HttpKernel\Kernel;

class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            new Symfony\Bundle\SecurityBundle\SecurityBundle(),
            new Symfony\Bundle\TwigBundle\TwigBundle(),
            new Symfony\Bundle\MonologBundle\MonologBundle(),
            new Symfony\Bundle\SwiftmailerBundle\SwiftmailerBundle(),
            new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),
            new Sensio\Bundle\FrameworkExtraBundle\SensioFrameworkExtraBundle(),
            //new AppBundle\AppBundle()
        );

        // Register extra bundles only for the "api" environment
        if (in_array($this->getEnvironment(), array('api'), true)) {
            $bundles[] = new ApiBundle\ApiBundle();
            //-- Other bundles
        }

        //-- Other environments

        return $bundles;
    }

    // registerContainerConfiguration() etc. as usual
}
It mostly depends on the quality of the bundles, and on how tightly connected they are.
I would reject option 3 (creating a different Symfony2 project for the admin area and the API) at the start, as you're probably not building two separate applications.
Having multiple kernels for different parts of the application (API, admin and core website)
A common problem is created by listeners and services in the container, especially when a listener should work only in one of the app contexts (api/frontend/backend). Even if you remember to check this at the very beginning of the listener method (and do the magic only in the wanted context), the listener can still depend on injected services which need to be constructed and injected anyway. A good example here is FOSRestBundle: even if you configure zones, then on the frontend (when view_listener is activated for the api) the view_handler is still initialized and injected into the listener - https://github.com/FriendsOfSymfony/FOSRestBundle/blob/master/Resources/config/view_response_listener.xml#L11 I'm not 100% sure here, but disabling translations and Twig (etc.) for the API (most APIs don't need them) should also speed it up.
Creating a separate kernel for the API context would solve that issue (in our project we use one kernel and we had to disable that listener - blackfire.io profiles were telling us that it saves ~15ms on every frontend request).
Creating a new kernel for the API would make sure that none of the API-only services/listeners interfere with frontend/backend rendering (it works both ways). But it will create additional work for you in building shared components used by bundles from different kernels - though in a world with Composer that's not a huge task anymore.
But this is only a concern for people who measure every millisecond of response time, and it depends on the quality of your own and third-party bundles. If everything there is perfectly fine, then you don't need to mess with kernels.
It's a personal choice, but I have a similar project and I have a publicBundle, adminBundle and apiBundle all within the same project.
The extra performance hit is negligible, but organisation is key... that is why we're using an MVC package (Symfony) in the first place, is it not? :)
NB: Your terminology is a little confusing; I think by Kernel you mean Bundle.
Having several kernels would not necessarily help.
Split your application into bundles and keep all the advantages of sharing your entities (and so on) across the different parts of your application.
You can define separate routing/controllers/configuration that are loaded depending on the host/URL.
Note:
If you are going to separate your app into two big bundles (i.e. Admin & API) and the two share the same entities, you will surely have to make a choice. This choice may mean that one of your bundles contains too much (and unrelated) logic and will need to be refactored into several bundles later.
Create a bundle per section of your application that corresponds to a set of related resources, and differentiate between the two parts through different contexts in configuration.
Also, name your classes/namespaces sensibly.

Do Different CRM Orgs Running On The Same Box Share The Same App Domain?

I'm doing some in-memory caching for some plugins in Microsoft CRM. I'm attempting to figure out whether I need to be concerned about different orgs populating the same cache:
// In Some Plugin
var settings = Singleton.GetCache["MyOrgSpecificSetting"];
// Use Org specific cached Setting:
or do I need to do something like this to be sure I don't cross contaminate settings:
// In Some Plugin
var settings = Singleton.GetCache[GetOrgId() + "MyOrgSpecificSetting"];
// Use Org specific cached Setting:
I'm guessing this would also need to be factored in for Custom Activities in the AsyncWorkflowService as well?
Great question. As far as I understand, you would run into the issue you describe if you use static data and your assemblies are not registered in Sandbox Mode, so you would have to create some way to uniquely qualify the reference (as your second example does).
However, this goes against Microsoft's best practices in plugin/workflow activity development. A plugin should not rely on any state outside of the state that is passed into it. Here is what it says on MSDN found HERE:
The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time. All per invocation state information is stored in the context, so you should not use global variables or attempt to store any data in member variables for use during the next plug-in invocation unless that data was obtained from the configuration parameter provided to the constructor.
So the ideal way to manage caching would be to use either one or more CRM records (likely custom) or a different service to cache this data.
Synchronous plugins of all organizations within the CRM front end run in the same AppDomain, so your second approach will work. Unfortunately, the async service runs in a separate process, from which it would not be possible to access your in-proc cache.
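As a rough illustration of that org-prefixed, thread-safe cache (the cache class itself is made up, not part of the CRM SDK; the org ID would come from IPluginExecutionContext.OrganizationId):

using System;
using System.Collections.Concurrent;

// Illustrative only: a static, thread-safe cache keyed by organization ID + setting name.
public static class OrgSettingsCache
{
    private static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public static object GetOrAdd(Guid orgId, string key, Func<object> load)
    {
        // Prefixing with the org ID keeps organizations from reading each other's settings.
        return Cache.GetOrAdd(orgId + "|" + key, _ => load());
    }
}

// Usage inside a plugin, where context is the IPluginExecutionContext:
// var settings = OrgSettingsCache.GetOrAdd(context.OrganizationId, "MyOrgSpecificSetting", LoadSettings);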
I think it's technically impossible for Microsoft NOT to implement each CRM organization in at least its own AppDomain, let alone an AppDomain per loaded assembly. I'm trying to imagine how multiple versions of a plugin-assembly are deployed to multiple organizations and loaded and executed in the same AppDomain and I can't think of a realistic way. But that may be my lack of imagination.
I think your problem lies more in the concurrency (multi-threading) than in sharing of the same plugin across organizations. #BlueSam quotes Microsoft where they seem to be saying that multiple instances of the same plugin can live in one AppDomain. Make sure multiple threads can concurrently read/write to your in-mem cache and you'll be fine. And if you really really want to be sure, prepend the cache key with the OrgId, like in your second example.
I figure you'll be able to implement a concurrent cache, so I won't go into detail there.

How to provision OSGi services per client

We are developing a web application (let's call it an image bank) for which we have identified the following needs:
The application caters to customers, each of which consists of a set of users.
A new customer can be created dynamically, and a customer manages its users
Customers have different feature sets which can be changed dynamically
Customers can develop their own features and have them deployed.
The application is homogeneous and has a current version, but version lifting of customers can still be handled individually.
The application should be managed as a whole and customers share the resources which should be easy to scale.
Question: Should we build this on a standard OSGi framework or would we be better of using one of the emerging application frameworks (Virgo, Aries or upcoming OSGi standard)?
More background and some initial thoughts:
We're building a web-app which we envision will soon have hundreds of customers (companies) with hundreds of users each (employees), otherwise why bother ;). We want to make it modular hence OSGi. In the future customers themselves might develop and plugin components to their application so we need customer isolation. We also might want different customers to get different feature sets.
What's the "correct" way to provide different service implementations to different clients of an application when different clients share the same bundles?
We could use the app-server approach (we've looked at Virgo) and load each bundle once for each customer into their own "app". However it doesn't feel like embracing OSGi. We're not hosting a multitude of applications, 99% of the services will share the same impl. for all customers. Also we want to manage (configure, monitor etc.) the application as one.
Each service could be registered (properly configured) once for each customer along with some "customer-token" property. It's a bit messy and would have to be handled with an extender pattern or perhaps a ManagedServiceFactory? Also, before registering a service for customer A, one would need to acquire the A-version of each of its dependencies.
The "current" customer will be known to each request and can be bound to the thread. It's a bit of a mess having to supply a customer-token each time you search for a service. It makes it hard to use component frameworks like blueprint. To get around the problem we could use service hooks to proxy each registered service type and let the proxy dispatch to the right instance according to current customer (thread).
Beginning our whole OSGi experience by implementing the workaround (hack?) above really feels like an indication we're on the wrong path. So what should we do? Go back to Virgo? Try something similar to what's outlined above? Something completely different?!
ps. Thanks for reading all the way down here! ;)
There are a couple of aspects to a solution:
First of all, you need to find a way to configure the different customers you have. Building a solution on top of ConfigurationAdmin makes sense here, because then you can leverage the existing OSGi standard as much as possible. The reason you might want to build something on top is that ConfigurationAdmin allows you to configure each individual service, but you might want to add a layer on top so you can more conveniently configure your whole application (the assembly of bundles) in one go. Such a configuration can then be translated into the individual configurations of the services.
Adding a property to services that have customer specific implementations makes a lot of sense. You can set them up using a ManagedServiceFactory, and the property makes it easy to lookup the service for the right customer using a filter. You can even define a fallback scenario where you either look for a customer specific service, or a generic one (because not all services will probably be customer specific). Since you need to explicitly add such filters to your dependencies, I'd recommend taking an existing dependency management solution and extending it for your specific use case so dependencies automatically add the right customer specific filters without you having to specify that by hand. I realize I might have to go into more detail here, just let me know...
The next question then is how to keep track of the customer "context" within your application. Traditionally there are only a few options here, with a thread-local context being the most used one. Binding threads to customers does tend to limit your implementation options though, as in general it probably means you have to prohibit developers from creating threads themselves, and it's hard to off-load certain tasks to pools of worker threads. It gets even worse if you ever decide to use Remote Services, as that means you will completely lose the context.
So, for passing on the customer identification from one component to another, I personally prefer a solution where:
As soon as the request comes in (for example in your HTTP servlet) somehow determine the customer ID.
Explicitly pass on that ID down the chain of service dependencies.
Only use solutions like the use of thread locals within the borders of a single bundle, if for example you're using a third party library inside your bundle that needs this to keep track of the customer.
I've been thinking about this same issue (I think) for some time now, and would like your opinions on the following analogy.
Consider a series of web application where you provide access control using a single sign-on (SSO) infrastructure. The user authenticates once using the SSO-server, and - when a request comes in - the target web application asks the SSO server whether the user is (still) authenticated and determines itself if the user is authorized. The authorization information might also be provided by the SSO server as well.
Now think of your application bundles as mini-applications. Although they're not web applications, would it still not make sense to have some sort of SSO bundle using SSO techniques to do authentication and to provide authorization information? Every application bundle would have to be developed or configured to use the SSO bundle to validate the authentication (SSO token), and validate authorization by asking the SSO bundle if the user is allowed to access this application bundle.
The SSO bundle maintains some sort of session repository, and also provides user properties, e.g. information to identify the data repository (of some sort) of this user. This way you also wouldn't pass through a (meaningful) "customer service token", but rather a cryptic SSO token that is supplied and managed by the SSO bundle.
Please note that Virgo is an OSGi container based on Equinox, so if you don't want to use some Virgo-specific feature, you don't have to. However, you'll get lots of benefits if you do use Virgo, even for a basic OSGi application. It sounds, though, like you want web support, which comes out of the box with Virgo web server and will save you the trouble of cobbling it together yourself.
Full disclosure: I lead the Virgo project.
