Wakanda Shared Storage equivalent in v10

Due to compatibility issues, my project must remain in Wakanda 10. What is the best technique to keep a variable consistent across multiple server threads? For instance, if I want to make an object literal that can be modified, how can I best ensure the data is updated across all Wakanda Server threads?
For now, I am going to write the value to the datastore as a workaround. Any better suggestions would be appreciated. Would a shared worker help me?

Web workers can't access shared global variables; you can handle communication between web workers via message passing.
To make an object available to all workers you can:
Pass the object from one worker to another using postMessage;
Store the object in the database.
I believe the best way is to store your variable in the datastore. It's simpler, especially if you have a lot of workers.
Here are some related discussions:
Shared worker do share variables
Sharing variables between web workers? [global variables?]

Related

Do Different CRM Orgs Running On The Same Box Share The Same App Domain?

I'm doing some in-memory caching for some plugins in Microsoft CRM. I'm attempting to figure out whether I need to be concerned about different orgs populating the same cache:
// In Some Plugin
var settings = Singleton.GetCache["MyOrgSpecificSetting"];
// Use Org specific cached Setting:
or do I need to do something like this to be sure I don't cross-contaminate settings:
// In Some Plugin
var settings = Singleton.GetCache[GetOrgId() + "MyOrgSpecificSetting"];
// Use Org specific cached Setting:
I'm guessing this would also need to be factored in for Custom Activities in the AsyncWorkflowService?
Great question. As far as I understand, you would run into the issue you describe if you stored static data and your assemblies were not registered in Sandbox Mode, so you would have to create some way to uniquely qualify the reference (as your second example does).
However, this goes against Microsoft's best practices for plugin/workflow activity development: a plugin should not rely on any state beyond what is passed into it. Here is what the MSDN documentation says:
The plug-in's Execute method should be written to be stateless because
the constructor is not called for every invocation of the plug-in.
Also, multiple system threads could execute the plug-in at the same
time. All per invocation state information is stored in the context,
so you should not use global variables or attempt to store any data in
member variables for use during the next plug-in invocation unless
that data was obtained from the configuration parameter provided to
the constructor.
So the ideal way to manage caching would be to use either one or more CRM records (likely custom) or a different service to cache this data.
Synchronous plugins for all organizations on the CRM front end run in the same AppDomain, so your second approach will work. Unfortunately, the async service runs in a separate process, from which it is not possible to access your in-proc cache.
I think it's technically impossible for Microsoft NOT to implement each CRM organization in at least its own AppDomain, let alone an AppDomain per loaded assembly. I'm trying to imagine how multiple versions of a plugin assembly could be deployed to multiple organizations yet loaded and executed in the same AppDomain, and I can't think of a realistic way. But that may be my lack of imagination.
I think your problem lies more in concurrency (multi-threading) than in sharing the same plugin across organizations. @BlueSam quotes Microsoft, where they seem to be saying that multiple instances of the same plugin can live in one AppDomain. Make sure multiple threads can concurrently read/write your in-memory cache and you'll be fine. And if you really, really want to be sure, prepend the cache key with the OrgId, as in your second example.
I figure you'll be able to implement a concurrent cache, so I won't go into detail there.
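For reference, here is a minimal sketch of such an org-qualified, thread-safe cache. The class name, GetOrAdd helper, and loader delegate are hypothetical; ConcurrentDictionary handles the locking:
using System;
using System.Collections.Concurrent;

// Hypothetical org-aware cache: ConcurrentDictionary makes concurrent reads and
// writes safe, and prefixing the key with the org id keeps orgs that share an
// AppDomain from colliding.
public static class OrgSettingsCache
{
    private static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public static T GetOrAdd<T>(Guid orgId, string key, Func<T> load)
    {
        return (T)Cache.GetOrAdd(orgId + "|" + key, _ => load());
    }
}

// Usage inside a plugin, where the org id comes from the execution context:
// var settings = OrgSettingsCache.GetOrAdd(context.OrganizationId, "MyOrgSpecificSetting", LoadSettings);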

Standalone CQ5 publish instances without cluster

I am looking for the best solution for having multiple publish instances. I have tried shared-nothing and shared-datastore configurations.
Are there any advantages or disadvantages to having 2 or more publish instances without a cluster setup? In such a configuration, how can I start a new publish instance? I mean, how should I replicate the data from the author when I start a new publish instance (probably from a backup), and what are the best practices for this? How do I solve reverse replication issues: while I am starting a new instance, the other publish instances might receive new user-generated data, which must be replicated to the new publish instance too. What is your experience in this area?
Thanks in advance!
Multiple publish instances allow you to use a load balancer and increase the number of concurrent users your site can serve.
Replication of content can be done in the standard way, using CRX package replication or tree activation, once you've set up a replication agent per instance.
You shouldn't run into too much difficulty with content generated while the instance is being set up: once the instance is up and running, you can just activate that content in the normal way. (If you think it will be an issue, you could set up the replication agent first, so that new content is queued while the instance is being prepared, though I can't see why you would need to do this.)

How to achieve high availability?

I'm about to build a new system and I want maximum availability! I'll have to use Windows!
I will have clients talking to my system using web services. I'll also get data from surrounding systems. This data is delivered using messaging: MQSeries and MSMQ.
The system will produce some data that is sent back to the surrounding systems using queues.
After new data has come into the system, different processes will use this data to perform different tasks, like printing, writing to databases, etc.
To achieve high availability, I'm planning to have two versions of the system running in parallel on two different machines. The clients will try to use the first server that responds correctly.
I think an ideal solution would be for the incoming data from either of the two servers to be placed in a COMMON queue (on a third machine?). Data in the queue can be picked up by processes on both servers (think producer-consumer pattern).
I think that maybe NServiceBus will suit my needs. I have a few questions about the above.
Can a queue be shared between two servers? I don't want data to be stuck on a server if it goes down; in that case I want the other server to keep processing.
Can two (or more) "consumers"/processes on different machines pick data from a common queue?
Any advice is welcome!
The purpose of the NSB distributor is not to address availability issues but to address scale; distributors help scale out systems at a low cost.
By looking at the description, your system consists of web service endpoints, multiple databases, and queuing infrastructure. If you want to achieve complete high availability, you will have to make sure there are no single points of failure. In order to do that you will need:
A load balanced web farm for web service endpoints (2 or more servers)
Application cluster for queues and applications that relies on those queues.
Highly available database server, again clustered.
On top of everything a good SAN.
But if you are only referring to being available to consumers, you just have to make sure the target queues and web service endpoints are available, and that the overall architecture promotes deferred execution.
Two or more applications can read an MSMQ queue remotely, but that's something you don't want to do, since it relies on DTC, and that's a real performance killer.
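To illustrate the alternative: instead of two servers reading one remote queue, each server sends to and reads from its own local queue and lets MSMQ's store-and-forward handle delivery. A minimal System.Messaging sketch, with an illustrative queue path:
using System.Messaging;

class QueueDemo
{
    static void Main()
    {
        // Each server owns a local queue; MSMQ stores outgoing messages locally
        // and forwards them, so nothing is lost if a remote machine is down.
        const string path = @".\private$\orders";
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path, transactional: true);

        using (var queue = new MessageQueue(path))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send("new order", "order-label", tx);
            tx.Commit();
        }
    }
}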
Some references
http://blogs.msdn.com/b/clustering/archive/2012/05/01/10299698.aspx
http://msdn.microsoft.com/en-us/library/ms190202.aspx
In short, you will want to use the distributor: http://support.nservicebus.com/customer/portal/articles/859556-load-balancing-with-the-distributor
The key thing here is that the distributor node is a single point of failure, so you want to run it on a cluster.

Multiple webRole instances at Azure and session state

I have a webRole with some data stored in Session. The data consists of a few dozen small variables (strings) and one or two big objects (a few megabytes). I need to run this webRole in multiple instances. Since two requests from a single user can go to different instances, Session becomes useless. So I am looking for the most efficient and simplest method of storing volatile user data in this case. I know that I can store it in cookies on the client side, but that will fail for big objects. I also know that I can store data in Azure storage, but that seems more complicated than Session. Can anybody suggest a method that is both efficient and simple, like Session state? Or perhaps a workaround to get Session state working correctly when multiple instances are enabled?
This may help
http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/7ddc0ca8-0cc5-4549-b44e-5b8c39570896
You need to use a session state store other than in-process memory. In Azure you can use the Cache service, Table storage, or SQL Server to share session data between instances.
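For example, moving session state to SQL Server is a small web.config change (the connection string is illustrative, and the session database must first be created with the aspnet_regsql.exe -ssadd tool):
<system.web>
  <!-- All web role instances read and write session data in the shared ASPState database -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Server=myDbServer;Integrated Security=SSPI;"
                timeout="20" />
</system.web>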

What's the best place for a database-backed, memory-resident global cache in an ASP.NET web server?

I have to cache an object hierarchy in-memory for performance reasons, which reflects a simple database table with columns (ObjectID, ParentObjectID, Timestamp) and view CurrentObjectHierarchy. I query the CurrentObjectHierarchy and use a hash table to cache the current parents of each object for quickly looking up the parent object ID, given any object ID. Querying the database table and constructing the cache is a 77ms operation on average, and ideally this refresh occurs only when a method in my database API is called that would change the hierarchy (adding/removing/reparenting an object).
Where is the best place for such a cache, if it must be accessed by multiple ASP.NET web applications, possibly running in different application pools?
Originally, I was storing the cache in a static variable in a C# dll shared by the different web applications. The problem, of course, is that while static variables can be accessed across threads, they cannot be accessed across processes, which is a problem when multiple web-apps are involved (possibly running in separate application pools). As a result... synchronized, thread-safe modifications to the object hierarchy cache in one application are not reflected in other applications, even though they are using the same code-base.
So I need a more global location for this cache. I cannot use static variables (as I just explained), session state (which is basically a per-user store), or application state (which is scoped to a single application, whereas the cache needs to be accessible across applications).
Potential places I've been considering are:
Some kind of global object storage within IIS itself, accessible from any thread in any application in any application pool (if such a place exists. Does it?)
A separate, custom web service that manages an exclusive cache.
Right now, I think the BEST solution is SQL CLR integration, because:
I can keep my current design using static variables
It's a separate service that already exists, so I don't have to write a custom one
It will be running in a single process (SQL Server), so the existing lock-based synchronization will work fine
The cache would be sitting as close as possible to the data structures it represents!
I would embed the hierarchy-traversing methods in the SQL CLR DLL, so that I could make a single SQL call where I would normally make a regular method call. This all depends on SQL Server running in a single process and the CLR being loaded into that process, which I think is the case. What do you think of this? Can you see anything obviously wrong with this idea that I may be missing? Is this not an awesome idea?
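As a rough sketch of that idea (all names are hypothetical), a SQL CLR function could consult a static lookup table kept in the SQL Server process. One caveat: mutable static fields require the assembly to be registered with PERMISSION_SET = UNSAFE, since SAFE assemblies only allow read-only statics.
using System.Collections.Concurrent;
using Microsoft.SqlServer.Server;

public static class HierarchyCache
{
    // One map shared by every caller in the SQL Server process.
    private static readonly ConcurrentDictionary<int, int> ParentOf =
        new ConcurrentDictionary<int, int>();

    // Exposed to T-SQL, e.g.: SELECT dbo.GetParent(42);
    [SqlFunction]
    public static int GetParent(int objectId)
    {
        int parent;
        return ParentOf.TryGetValue(objectId, out parent) ? parent : -1;
    }
}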
EDIT:
After looking more closely, it seems that different ASP.NET applications actually run in the same process, but are isolated by AppDomains. If I could find a way to share and synchronize data across AppDomains, that would be very very useful. I'm reading about .NET Remoting now.
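A minimal sketch of the cross-AppDomain approach, within a single worker process: derive the cache from MarshalByRefObject so other AppDomains talk to one instance through a remoting proxy instead of getting their own copy. Note this does not help across separate application pools, which are separate processes:
using System;
using System.Collections;

public class SharedCache : MarshalByRefObject
{
    // Synchronized wrapper so concurrent proxy calls are safe.
    private readonly Hashtable table = Hashtable.Synchronized(new Hashtable());

    public object Get(string key) { return table[key]; }
    public void Set(string key, object value) { table[key] = value; }
}

class Demo
{
    static void Main()
    {
        // The cache lives in its own domain; callers get a transparent proxy.
        AppDomain cacheDomain = AppDomain.CreateDomain("cacheDomain");
        var cache = (SharedCache)cacheDomain.CreateInstanceAndUnwrap(
            typeof(SharedCache).Assembly.FullName,
            typeof(SharedCache).FullName);

        cache.Set("answer", 42);                // runs inside cacheDomain
        Console.WriteLine(cache.Get("answer")); // prints 42
    }
}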
Microsoft is working on a distributed caching framework: Velocity. However, the latest release is a CTP3 version, so it may not be production ready...
