MarshalByRefObject - .net-remoting

Just a quick one for an expert in the field :)
Can I use the MarshalByRefObject class to get objects to be referenced across a network, rather than just across application domains?
If not, is there another set of classes to do this? i.e. reference and use an object across a network where that object is processed and stored at the remote location.
Thanks a million,
Mike

Yes, MarshalByRefObject is the base class for remotely accessible objects in .NET Remoting. It works whether the remote object is in another application domain in the same process or in an application domain on a machine on the other side of the globe.
By the way, .NET Remoting is considered obsolete in favor of newer technologies like WCF. While it's still suitable for some applications, most new applications should consider using WCF.
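For illustration, here is a minimal sketch of what that looks like over TCP. The class name, port, and URI below are all hypothetical; the key point is that deriving from MarshalByRefObject means clients receive a proxy while the object itself lives and executes on the server.

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical remotable type: because it derives from MarshalByRefObject,
// callers get a transparent proxy and calls execute where the object lives.
public class SettingsService : MarshalByRefObject
{
    public string GetSetting(string key)
    {
        return "value-for-" + key; // processed and stored server-side
    }
}

class Server
{
    static void Main()
    {
        // Listen for remote calls over TCP; port and URI are illustrative.
        ChannelServices.RegisterChannel(new TcpChannel(9000), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(SettingsService), "SettingsService",
            WellKnownObjectMode.Singleton);
        Console.WriteLine("Listening...");
        Console.ReadLine();
    }
}

// A client on another machine would obtain a proxy like this:
// var svc = (SettingsService)Activator.GetObject(
//     typeof(SettingsService), "tcp://remotehost:9000/SettingsService");
// string v = svc.GetSetting("timeout"); // executes on the server
```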

Related

Handling large object in stateless environment

We have various Windows services that load a large amount of data (mostly settings) from a database into an object, which is used whenever calls are made to our various .NET Remoting functions (I know it's old!!). Having this object containing all these settings in memory saves us having to query the database constantly or load the data from a cache whenever queries are executed.
Settings in this "large" object are collections of data: IDs, paths, text, etc...
We want to move away from .NET Remoting to WCF and potentially get rid of our Windows services and run the lot under IIS (and eventually Azure), but since that environment is stateless, I'm wondering how we should handle this?
1) What's the best method you can think of? Preferably from experience.
One suggestion that was made to me was to return all of this to the client, cache it, and use only the relevant settings when making a WCF call.
2) Numerous services we have are polling services, constantly monitoring databases, file locations, FTP locations, etc... How would you recommend handling these in a stateless environment? I can't see how this will be handled.
We use SQL Server, but I don't want to rely too heavily on the built-in features, as we could potentially have to support the likes of MySQL and Oracle.
Thanks.
Thierry
You could store these settings in the appSettings section of the config file (Web.config for IIS). Using the ConfigurationManager class, you can retrieve the relevant values as needed.
If you prefer to keep a static instance of your settings object, I suggest implementing the Singleton pattern for it. Jon Skeet's article on singletons is a great starting point.
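For example, here is a minimal sketch of such a singleton, following one of the patterns from Jon Skeet's article. The Load method below is a stand-in for your existing database query:

```csharp
using System;
using System.Collections.Generic;

public sealed class SettingsCache
{
    // Lazy<T> gives thread-safe, lazy initialization (loaded once per process).
    private static readonly Lazy<SettingsCache> instance =
        new Lazy<SettingsCache>(() => new SettingsCache());

    public static SettingsCache Instance
    {
        get { return instance.Value; }
    }

    private readonly Dictionary<string, string> settings;

    private SettingsCache()
    {
        settings = Load(); // one database hit, then served from memory
    }

    public string Get(string key)
    {
        return settings[key];
    }

    private static Dictionary<string, string> Load()
    {
        // Placeholder: replace with your real query against the settings table.
        return new Dictionary<string, string> { { "example", "value" } };
    }
}
```

Note that under IIS each worker process would hold its own copy, so a cache that must be shared across application pools still needs an out-of-process store.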
Hope this helps.

Calling a WebMethod of one application from another application?

How can I call a webmethod of one application from another application, when both are developed in C#?
You can't do this directly, of course. It would be a huge security hole.
As suggested by your tag, it would be necessary for the developer of an application to explicitly expose to the world the methods he wants to be called from other applications. This could be done through WCF, or possibly through COM.
Alternatively, the code to be called could be placed into a class library and referenced by both projects.
Expose the method through SOAP or REST or COM or (going old-school) CORBA or ...
Be prepared that doing this is a massive increase in the complexity of the applications. You start to have to worry properly about security, and about how all the pieces interact, and many other issues. There's a lot of depth here, far too much for a simple answer.
This can be done using WCF instead of a web service.
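A minimal sketch of what that exposure might look like with WCF; the contract, binding, and address here are all hypothetical:

```csharp
using System;
using System.ServiceModel;

// The application that owns the method explicitly exposes it via a contract.
[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

class Host
{
    static void Main()
    {
        // Self-host the service over basic HTTP (address is illustrative).
        using (var host = new ServiceHost(typeof(Calculator),
            new Uri("http://localhost:8000/calc")))
        {
            host.AddServiceEndpoint(typeof(ICalculator),
                new BasicHttpBinding(), "");
            host.Open();
            Console.ReadLine(); // keep the host alive
        }
    }
}

// The other C# application then calls it through a proxy:
// var factory = new ChannelFactory<ICalculator>(
//     new BasicHttpBinding(), "http://localhost:8000/calc");
// int sum = factory.CreateChannel().Add(2, 3);
```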

Should cluster support be at the application or framework level?

Let's say you're starting a new web project that requires the website to run on an MVC framework on Mono. A couple of major requirements are that it has to scale easily, be stable, and work with multiple servers that may or may not be in the same place or even on the same local network.
The first thing I thought of was a sort of cluster communication between servers. Each server would act as a node and be its own standalone application and would query other nodes in a known list for session information and things like that.
But one major design question I have is: should this functionality be built into the supporting framework, or should the application handle the synchronization of the data?
Or am I just way off and this would never work?
Normally clustering belongs to some kind of middleware layer, and thus at your framework level. However, it can also be implemented at the application level.
It depends on your exact use case: whether you want load balancing, scalability, etc.
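To make the distinction concrete, here is one hedged sketch of the framework-level approach: the application codes against an abstraction, and the framework or middleware supplies the clustered implementation. All names here are illustrative.

```csharp
using System.Collections.Concurrent;

// The application only ever sees this interface.
public interface ISessionStore
{
    byte[] Get(string sessionId);
    void Put(string sessionId, byte[] data);
}

// In-memory stand-in; a clustered implementation would also consult the
// known list of peer nodes (over HTTP, TCP, etc.) before reporting a miss,
// without the application code changing at all.
public class LocalSessionStore : ISessionStore
{
    private readonly ConcurrentDictionary<string, byte[]> store =
        new ConcurrentDictionary<string, byte[]>();

    public byte[] Get(string sessionId)
    {
        byte[] bytes;
        return store.TryGetValue(sessionId, out bytes) ? bytes : null;
    }

    public void Put(string sessionId, byte[] data)
    {
        store[sessionId] = data;
    }
}
```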

What's the best place for a database-backed, memory-resident global cache in an ASP.NET web server?

I have to cache an object hierarchy in-memory for performance reasons, which reflects a simple database table with columns (ObjectID, ParentObjectID, Timestamp) and view CurrentObjectHierarchy. I query the CurrentObjectHierarchy and use a hash table to cache the current parents of each object for quickly looking up the parent object ID, given any object ID. Querying the database table and constructing the cache is a 77ms operation on average, and ideally this refresh occurs only when a method in my database API is called that would change the hierarchy (adding/removing/reparenting an object).
Where is the best place for such a cache, if it must be accessed by multiple ASP.NET web applications, possibly running in different application pools?
Originally, I was storing the cache in a static variable in a C# dll shared by the different web applications. The problem, of course, is that while static variables can be accessed across threads, they cannot be accessed across processes, which is a problem when multiple web-apps are involved (possibly running in separate application pools). As a result... synchronized, thread-safe modifications to the object hierarchy cache in one application are not reflected in other applications, even though they are using the same code-base.
So I need a more global location for this cache. I cannot use static variables (as I just explained), session state (which is basically a per-user store), or application state (which is scoped to a single application, while this cache needs to be accessible across applications).
Potential places I've been considering are:
Some kind of global object storage within IIS itself, accessible from any thread in any application in any application pool (if such a place exists. Does it?)
A separate, custom web service that manages an exclusive cache.
Right now, I think the BEST solution is SQL CLR integration, because:
I can keep my current design using static variables
It's a separate service that already exists, so I don't have to write a custom one
It will be running in a single process (SQL Server), so the existing lock-based synchronization will work fine
The cache would be sitting as close as possible to the data structures it represents!
I would embed the hierarchy-traversing methods in the SQL CLR DLL, so that I could make a single SQL call where I would normally make a regular method call. This all depends on SQL Server running in a single process and the CLR being loaded into that process, which I think is the case. What do you think of this? Can you see anything obviously wrong with this idea that I may be missing? Is this not an awesome idea?
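To illustrate the idea (a sketch, not a tested implementation): a SQL CLR function could keep the hash table in a static field inside the SQL Server process, so every caller sees the same instance. One caveat worth noting: mutable static state requires the assembly to be registered with PERMISSION_SET = UNSAFE.

```csharp
using System.Collections.Generic;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public static class HierarchyCache
{
    // Static state lives in the single SQL Server process, shared by all callers.
    private static readonly Dictionary<int, int> parents = new Dictionary<int, int>();

    [SqlFunction]
    public static SqlInt32 GetParentObjectId(SqlInt32 objectId)
    {
        lock (parents) // the existing lock-based synchronization carries over
        {
            int parent;
            return parents.TryGetValue(objectId.Value, out parent)
                ? new SqlInt32(parent)
                : SqlInt32.Null;
        }
    }

    // A refresh method, called by the database API methods that change the
    // hierarchy, would repopulate 'parents' from CurrentObjectHierarchy; elided here.
}
```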
EDIT:
After looking more closely, it seems that different ASP.NET applications actually run in the same process, but are isolated by AppDomains. If I could find a way to share and synchronize data across AppDomains, that would be very very useful. I'm reading about .NET Remoting now.
Microsoft is working on a distributed caching framework: Velocity. However, the latest release is a CTP3 version, so it may not be production ready...

Querying list items and using SharePoint web services vs the object model

My company is looking into writing a custom application that will need to perform many list item queries across multiple site collections. It will need to run on WSS 3.0, and it 'would be nice' if it worked on WSS 2.0 as well. It won't be designed for MOSS/SPS, but again, it 'would be nice' if it worked on those platforms. There is no restriction on which .NET version should be used for the solution.
For this type of application, what would be better: the object model/API or SharePoint web services? The primary factor I'm considering is performance, followed by features and functionality. Thanks!
The object model is better, as you gain access to additional features and the full detail of the list items, such as the version history.
The object model is also better for performance (as long as you Dispose() your SPSite and SPWeb objects properly).
The SharePoint object model has some differences between v2 and v3, but if you code against the v2 reference, it will also work fully with v3.
The web services have not changed at all between v2 and v3, which explains why they do not expose any of v3's new features.
The reason the object model will win on performance is that you will not be serialising the data as XML, transmitting a large chunk of XML, and then deserialising it. The object model spares your memory and bandwidth.
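For reference, a minimal sketch of the disposal pattern mentioned above; the URL and list name are illustrative:

```csharp
using System;
using Microsoft.SharePoint;

class ListQuerySample
{
    static void Main()
    {
        // SPSite and SPWeb hold unmanaged resources, so always wrap them
        // in using blocks (or call Dispose() explicitly).
        using (SPSite site = new SPSite("http://server/sites/example"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Tasks"];
            foreach (SPListItem item in list.Items)
            {
                Console.WriteLine(item.Title);
            }
        }
    }
}
```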
The first thing to consider is: "will my code run on a SharePoint server, or remotely?"
If it's running remotely, you don't have any choice: use the web services.
If it's running on a SharePoint server, I would suggest using the object model, as performance will be better, you'll have access to more of the API, and authentication will be easier (=automatic).
+1 to the other posters.
If you decide to go the OM route, you can compile for both WSS 2.0 and WSS 3.0 from the same source. These should get you started:
Developing for Sharepoint 2003 using Visual Studio 2008?
How to reference two versions of an API?
Can the OM be used inside an InfoPath form? Currently I'm using the web services to pull in the list data I want, but I would rather use the OM.
