Since no one answered this question:
What issues to consider when rolling your own data-backend for Silverlight / AJAX on non-ASP.NET server?
Let me ask it another way:
How does WCF RIA Services handle authentication/authorization/security at a low level?
For example, how does the application on the server determine that an incoming HTTP request to change data is coming from a valid client and not from an undesirable source, e.g. a denial-of-service bot?
From my investigation, all calls into the RIA service classes are forced through a custom IOperationInvoker class (enforced by a custom IOperationBehavior class). This invoker calls into DomainService to have the operation executed.
Before it is executed, the method call is validated against any/all AuthorizationAttribute attributes marked on the operation in question. Each AuthorizationAttribute (the two provided are RequiresAuthenticationAttribute and RequiresRoleAttribute) is given an opportunity to accept or reject the call via the abstract IsAuthorized method.
If any of these attributes returns something other than AuthorizationResult.Allowed, an UnauthorizedAccessException is thrown carrying the ErrorMessage from the returned AuthorizationResult.
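To make that concrete, here is a minimal sketch of a custom AuthorizationAttribute. The IsAuthorized override and the AuthorizationResult usage follow the RIA Services API (System.ServiceModel.DomainServices.Server); the attribute name and the block-list check are invented purely for illustration.

using System;
using System.Collections.Generic;
using System.Security.Principal;
using System.ServiceModel.DomainServices.Server;

// Hypothetical attribute: rejects callers whose identity is on a block list.
public class DenyBlockedClientsAttribute : AuthorizationAttribute
{
    // Invented placeholder for whatever check you actually need.
    private static readonly HashSet<string> BlockedClients =
        new HashSet<string> { "known-bad-client" };

    protected override AuthorizationResult IsAuthorized(
        IPrincipal principal, AuthorizationContext authorizationContext)
    {
        if (!principal.Identity.IsAuthenticated ||
            BlockedClients.Contains(principal.Identity.Name))
        {
            // Anything other than AuthorizationResult.Allowed causes the invoker
            // to throw UnauthorizedAccessException with this message.
            return new AuthorizationResult("You are not authorized to perform this operation.");
        }

        return AuthorizationResult.Allowed;
    }
}

Applied to a domain operation alongside the built-in attributes, it runs through the same invoker path described above.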
I am developing a Laravel application and using a service layer pattern to isolate business logic. What I come across in all the tutorials/articles is passing the HTTP request object from the controller directly into the service. To me, this goes against the principle of a service being an API-independent piece of code with a single responsibility for a certain piece of functionality. Imagine I would like to call the service from the command line or from an event handler; I would then have to construct an HTTP request object just to pass it to the service.
The same goes for validation: as far as I understand, on failure the validator would either redirect the user back (which makes no sense in the case of a command line call or an event handler) or return an HTTP error.
On the other hand, with a lot of form fields there should be some structure for passing the data in, and the form itself already provides such a structure.
What are best practices regarding this?
I have an in-memory object graph accessible via ASP.NET Web API 2. The GET, POST, PUT, and DELETE code executes correctly, except that the accessed collection is "untouched" when the next action method is called. I test with Fiddler as well as my own clients. It looks like the collection is reconstructed for every call.
However, I need a single object graph that is accessed by all clients. Can Web API be configured to use singleton data, like WCF? Or do I have to make the data a singleton myself? I am testing in VS 2013; I don't have a dedicated OWIN host yet.
Create a message handler class that derives from DelegatingHandler.
Pass your graph into the constructor.
Add an instance of your handler to the config.MessageHandlers collection.
When a request passes through your message handler, add your graph to the properties collection of the request.
Create an extension method to make it easy to pull the graph out of the request object.
Make sure your graph is thread safe.
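A minimal sketch of those steps, assuming Web API 2's DelegatingHandler and HttpRequestMessage.Properties (the CarGraph type, the handler name, and the "SharedObjectGraph" key are placeholders):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Placeholder for whatever your shared, thread-safe object graph type is.
public class CarGraph { /* ... */ }

public class ObjectGraphHandler : DelegatingHandler
{
    public const string PropertyKey = "SharedObjectGraph";
    private readonly CarGraph _graph;

    public ObjectGraphHandler(CarGraph graph)
    {
        _graph = graph;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Hand every request a reference to the one shared graph.
        request.Properties[PropertyKey] = _graph;
        return base.SendAsync(request, cancellationToken);
    }
}

public static class ObjectGraphRequestExtensions
{
    // Lets a controller action pull the graph back out via Request.GetObjectGraph().
    public static CarGraph GetObjectGraph(this HttpRequestMessage request)
    {
        return (CarGraph)request.Properties[ObjectGraphHandler.PropertyKey];
    }
}

Register it once in WebApiConfig with config.MessageHandlers.Add(new ObjectGraphHandler(theOneGraph)); every call then sees the same instance, which is why the graph itself must be safe for concurrent access.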
I create a new Split-Join (in the OSB workshop application). Then I use an "Invoke Service" action to call an unsecured business service. So far, no problem. When I assign a security policy to my business service, however, OSB does not accept it. Here is the error message in the OSB workshop:
[Parallel, Scope, Invoke Service]
The WSDL Binding for BusinessService "OSB/1_0/BusinessServices/TestBS" is not supported: The service feature "WS-Security" is not supported.
How can I call a secured business service from a Split-Join?
Thanks
I'll give a slightly expanded version of user2364825's correct answer.
Split-Join is actually a "window" into an older product (that's why it looks and behaves differently from OSB). That product has some limitations, including an inability to work with WS_POLICY.
There are two commonly used workarounds for that.
Approach #1. Make a version of the same WSDL stripped of WS_POLICY and use it in the Split-Join. From the Split-Join, call the intermediate proxy with that stripped WSDL which in turn calls a business service with the original WSDL.
BizService(Stripped WSDL)->Split-Join->Proxy2(Stripped WSDL)->BizService(Real WSDL)
That approach only works if the WS_POLICY headers are created by OSB code.
If the message going via Split-Join already has some SOAP headers (including policies), those are going to be lost, and approach #1 will not work.
Approach #2. Make a custom WSDL which wraps the original message with all its SOAP Headers and whatnot. Use that WSDL for Split-Join, pass the wrapped message to an unwrapping proxy, and then call the real proxy/biz.
BizService(Wrapper WSDL)->Split-Join->Proxy2(Wrapper WSDL)->BizService(Real WSDL)
The second approach is more complex, but also more powerful. For instance, it can easily be extended to support user headers (Split-Join doesn't support them either), passing debug information, and pretty much anything else.
This approach is implemented in my GenericParallel service, which does all of the above and more.
I also have a blog post outlining how to pass SOAP headers via Split-Join in a bit more detail. (The WS_Policy is just a SOAP header, after all.)
You can never call a WSDL-based proxy/business service that has WS_POLICY defined in the WSDL from a Split-Join. You need an intermediate business/proxy service to pass the message on to the service whose WSDL contains the WS-Policy.
Before WebAPI, I did all client-side remote validation calls using regular MVC action methods. With WebAPI, I can now have POST, PUT, DELETE, and GET methods on an ApiController. However, validation still needs to happen.
I have successfully been able to put remote validation action methods on an ApiController and get them to work. Before submitting a POST, PUT, or DELETE for a resource, the client can POST to one or more validation URLs to validate user input and receive appropriate validation messages.
My question is, should these remote validation actions be on an ApiController? Or a regular MVC controller? It seems to me having them all in the ApiController makes the most sense, because that class can then encapsulate everything having to do with resource (and resource collection) mutations.
Update: in reply to @tugberk
I should elaborate. First, we are not using DataAnnotations validation. There are already rich validation rules and messages configured on the domain layer commands using FluentValidation.NET. Many of the validation classes use dependency injection to call into the database (to validate uniqueness, for example). FluentValidation plugs nicely into MVC ModelState, but I have not found a good way to plug it into Web API ModelState yet.
Second, we are doing validation at the POST, PUT, and DELETE endpoints. Clients do not need to know the validation endpoints in order to discover what went wrong. Here is an example:
var command = Mapper.Map<CreateCarCommand>(carApiModel);
try
{
    _createHandler.Handle(command);
}
catch (ValidationException ex)
{
    return Request.CreateResponse(HttpStatusCode.BadRequest, ex.Message);
}
Clients will get a 400 response along with a message indicating what went wrong. Granted, this is not as granular as the response in the example you linked to. Because we are just returning a string, there is no easy way to parse out which field each validation message belongs to, which is something our own HTML + JavaScript client of the API needs. This is why I spiked out more granular validation endpoints (as a side note, they are consumed by field-specific knockout-validation calls in our JavaScript client).
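For what it's worth, one way to make that 400 more granular without separate validation endpoints, assuming the ValidationException above is FluentValidation's (its Errors collection exposes PropertyName and ErrorMessage per failure), would be a catch block along these lines (with using System.Linq at the top of the file):

catch (ValidationException ex)
{
    // Project each failure into a field/message pair the JavaScript client can bind
    // back to individual form fields.
    var errors = ex.Errors.Select(f => new
    {
        Field = f.PropertyName,
        Message = f.ErrorMessage
    });
    return Request.CreateResponse(HttpStatusCode.BadRequest, errors);
}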
I am assuming that by remote validation you are referring to something similar to ASP.NET MVC Remote Validation. In that case, I don't think your HTTP API needs remote validation. Think about a scenario where I need to consume your HTTP API from my .NET application and assume that you have remote validation. Two things bother me here:
That remote validation is not discoverable unless you provide a .NET client for your API yourself and put that logic inside the client.
Assuming the remote validation is there for the .NET client and the application makes a validation call to the server before sending the actual request, this is just overkill.
In my opinion, the user sends a request to your API and you should perform the validation there. You can find a sample at the following URL:
ASP.NET Web API and Handling ModelState Validation
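The linked article covers handling ModelState validation in Web API; the usual shape of such a filter (this is a typical sketch, not necessarily the article's exact listing, and the attribute name is made up) is:

using System.Net;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class InvalidModelStateFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Short-circuit the pipeline with a 400 that carries the ModelState errors,
        // so invalid requests never reach the action body.
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest, actionContext.ModelState);
        }
    }
}

Registered globally (config.Filters.Add(new InvalidModelStateFilterAttribute());), it validates every incoming POST/PUT without per-action boilerplate.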
In every MVC framework I've tried (Rails, Merb, Waves, Spring, and Struts), the idea of a Request (and Response) is tied to the HTTP notion of a Request. That is, even if there is an AbstractRequest that is a superclass of Request, the AbstractRequest has things like headers, request method (GET, POST, etc.), and all of the other things tied to HTTP.
I'd like to support a request-response cycle over SMS, Twitter, email, or any other medium for which I can make an adapter. Is there a framework that does this particularly well?
The only other option I've thought of is creating, for example, a Twitter poller that runs in a separate thread and translates messages into local HTTP requests, then sends the responses back out.
If there were a good framework for multiple request media, what would routing look like? In Rails, the HTTP routing looks something like:
map.connect 'some/path/with/:parameter_1/:parameter_2', :controller => 'foo', :action => 'bar'
How would a Twitter or SMS route look? Regular expressions to match keywords and parameters?
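Purely as a thought experiment (nothing here is a real framework API, and C# is used only for concreteness), keyword/regex routing for inbound SMS or tweets might look something like this, with named capture groups playing the role of route parameters:

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Hypothetical route entry: a pattern plus a handler standing in for controller#action.
public class TextRoute
{
    public Regex Pattern;
    public Action<IDictionary<string, string>> Handler;
}

public static class TextRouter
{
    private static readonly List<TextRoute> Routes = new List<TextRoute>
    {
        new TextRoute
        {
            // e.g. the SMS "BAL 12345" routes to an imaginary accounts#balance
            Pattern = new Regex(@"^BAL\s+(?<account>\d+)$", RegexOptions.IgnoreCase),
            Handler = p => Console.WriteLine("accounts#balance for " + p["account"])
        }
    };

    public static void Dispatch(string inboundMessage)
    {
        foreach (var route in Routes)
        {
            var match = route.Pattern.Match(inboundMessage);
            if (!match.Success) continue;

            // Named groups become the equivalent of :parameter_1, :parameter_2, etc.
            var parameters = new Dictionary<string, string>();
            foreach (var name in route.Pattern.GetGroupNames())
                parameters[name] = match.Groups[name].Value;

            route.Handler(parameters);
            return;
        }
    }
}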
I haven't seen one. The issue is that the request is also tied to the host, and the response is tied to the request.
So if you get a request in via email, and a controller says to render view "aboutus", you'd need the MVC framework to know how to:
get the request in the first place - the MVC framework would almost need to be a host (IIS doesn't get notified on new emails, so how does your email polling code get fired?)
allow flexible route matching - matching by path/url wouldn't work for all, so request-specific controller routing would be needed
use the aboutus email view rather than the SMS or HTTP view named "aboutus"
send the response out via email, to the correct recipient
A web MVC framework isn't going to cut it - you'll need an MVC "host" that can handle activation through web, SMS, email, whatever.
The Java Servlet specification was designed so that Servlets would be protocol neutral and could be extended in a protocol-specific way - HttpServlet being a protocol-specific Servlet extension. I always imagined that Sun, or other third-party framework providers, would come up with other protocol-specific extensions like FtpServlet or MailServlet, or in this case SmsServlet and TwitterServlet.
Instead what has happened is that people either completely bypassed the Servlet framework, or have built their protocols on top of HTTP.
Of course, if you want to implement a protocol-specific extension for your required protocols, you would have to develop the whole stack - request object, response object, a mechanism of identifying sessions (for example using the MSISDN in an SMS instead of cookies), a templating and rendering framework (equivalent of JSP) - and then build an MVC framework on top of it.
You seem to be working mostly with Java and/or Ruby, so forgive me that this answer is based on Perl :-).
I'm very fond of the Catalyst MVC Framework (http://www.catalystframework.org/). It delegates the actual mapping of requests (in the general, generic sense) to code via engines. Granted, all the engine classes are currently based on HTTP, but I have toyed with the idea of trying to write an engine class that wasn't based on HTTP (or was perhaps tied to something like Twitter, but was separated from the HTTP interactions that Twitter uses). At the very least, I'm convinced it can be done, even if I haven't gotten around to trying it yet.
You could implement a REST-based adapter over your website, which replaces the templates and the redirects according to the input parameters.
All requests coming in on api.yourhost.com would be handled by the REST-based adapter.
This adapter would allow you to call your website programmatically and get the result back in a parseable format.
In practice this means it replaces the templates with its own template engine, in which the following happens:
instead of the assigned template, a generic XML/JSON template is called, which simply outputs XML containing all the template variables
you can then build your Twitter poller or SMS gateway against it, or even call it from JavaScript.