I would like to build some kind of JSON API scheduler service to replay web requests later on my production server.
It should be possible to POST this to localhost/app/events on my development machine:
{
  "schedule": {
    "start": "2014-12-31",
    "repeat": "annually"
  },
  "request": {
    "verb": "POST",
    "href": "http://localhost/app/emails",
    "body": {
      "type": "HappyNewYearWishes"
    }
  }
}
Given ASP.NET Web API as the implementation mechanism, how do I parse and persist the "request" part to the database, so that the production server will be able to
POST /emails
{
  "type": "HappyNewYearWishes"
}
according to the schedule? The problem is that the deployment root differs between the development machine and the production server, so I cannot persist "href" as it is. Which mechanisms of ASP.NET Web API for route parsing, transformation, and persistence are useful here?
You can get the route parameters using
RouteData.Values
(in Web API, Request.GetRouteData().Values). You can also get any query-string parameters using
Request.Url.Query
(in Web API, Request.RequestUri.Query). With the results of these two objects stored, you can easily rebuild the URL.
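For the deployment-root problem, one option (a minimal sketch; ScheduledRequest and the root URIs below are hypothetical) is to persist the verb, the body, and a path relative to the deployment root, then resolve that path against the current host's root when the schedule fires:

public class ScheduledRequest
{
    public string Verb { get; set; }          // e.g. "POST"
    public string RelativePath { get; set; }  // e.g. "emails" - stored without the deployment root
    public string Body { get; set; }          // raw JSON body to replay
}

// On the development machine, strip the deployment root before persisting:
var posted = new Uri("http://localhost/app/emails");
var devRoot = new Uri("http://localhost/app/");
string relativePath = devRoot.MakeRelativeUri(posted).ToString();   // "emails"

// On the production server, rebuild the absolute URL against its own root:
var prodRoot = new Uri("https://api.example.com/app/");
var target = new Uri(prodRoot, relativePath);                       // .../app/emails

The production root itself can come from configuration on each environment.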
I'd like to know if it is possible to simulate the OAuth (1, 2) authentication flow. I'd like to test without needing to connect to the provider itself. It should be possible, as it is just some communication exchange. I'm not looking for something like this, where they still communicate with the remote server; I'd like to be completely offline when testing.
Maybe I can run my own OAuth server. I will be using Google's OAuth services, so the server should behave the same way they do. Does Google provide code for their OAuth server, or is it possible to create some fake server? Note that the test should be more of an integration test: I would like to command the server to return predefined responses. Switching to the live OAuth provider would then just be a matter of changing the remote URL.
Maybe just some HTTP server is OK; I just need to take care of the proper format of the communicated messages.
Take a look at the Client-Side REST Tests section of the Spring Reference docs. With this support you can easily fake the server and record the desired behaviour into MockRestServiceServer.
Here are some examples I created.
Please see the steps below to mock an OAuth2 token for faster local development using SoapUI.
Steps:
Create a REST SoapUI project and add a POST resource for the URL "http://localhost:9045/oauth/token".
Create a mock service for the above resource.
Create a mock response as shown below; you can add your own parameters and values depending on your requirements.
{
"access_token":"MockOauth2TokenForLocaldevelopmentnTQ0NjJkZmQ5OTM2NDE1ZTZjNGZmZjI3",
"token_type":"bearer",
"expires_in":35999,
"scope":"read write",
"jti":"4d540b94-1854-45fa-b1d6-c2039d94b681"
}
Start the mock service.
Test using your local REST POST request.
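If you want to exercise the mock from code rather than from SoapUI itself, here is a hedged sketch; any HTTP client works, and the form fields are illustrative only and should match whatever grant your real provider expects:

using System;
using System.Collections.Generic;
using System.Net.Http;

class MockTokenSmokeTest
{
    static void Main()
    {
        using (var http = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "user" },
                { "password", "pass" }
            });

            // POST to the mocked endpoint and print the canned JSON shown above
            var response = http.PostAsync("http://localhost:9045/oauth/token", form).Result;
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}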
(Screenshots: mock response and SoapUI OAuth2 mock setup.)
I consider myself a rank beginner to OWIN, and after reading a lot of documentation I have only become more confused by conflicting notions than before I began. I know these are multiple questions, but I feel answering them will clear up the most fundamental doubts about OWIN and how best to use it. Here are my questions:
1. What can I use OWIN middleware for that I couldn't already do using message handlers or HTTP modules? Or are they both the same thing except that the latter two are tightly coupled with IIS?
2. A lot of the documentation says OWIN allows for decoupling between the web server and the web application, i.e. removing the dependency on IIS for hosting, say, Web API applications. But I have yet to see an example of a web application or Web API that used OWIN and was successfully ported from being hosted on IIS to some other web server. So are IIS and self-hosting the only options for this decoupling between web server and web app?
3. When I searched for OWIN middleware examples, I only found Katana and Helios, which are the only two implementations of the OWIN spec. Katana is almost done and won't go beyond revision 3, and Helios is not yet supported by Microsoft, as per some articles. So what is the future of OWIN in that case?
4. The only detailed practical usage I have seen so far is using OWIN for authentication with OAuth 2. Are there any other such usages of keeping an OWIN implementation in the middle?
5. In my startup class's Configuration method I tried to chain simple middleware code snippets to be able to see the request being sent in, but got errors. How do I see the request coming in and modify it for the next component in the middleware?
6. What are the various kinds of middleware that you have plugged in to your projects between the web server and the application?
Thanks for answering any or all of the above.
What can I use OWIN middleware for that I couldn't already do using message handlers or HTTP modules? Or are they both the same thing except that the latter two are tightly coupled with IIS?
Decoupling from IIS is part of it. OWIN middleware is a pipeline that allows anything that is "OWIN aware" to be involved in the request, if it chooses. IHttpHandlers handle a single thing; they were not chainable. I like to compare the pipeline more to Global.asax. I've seen a lot of stuffed Global.asax handlers doing all sorts of things like authentication, authorization, and spitting out HTTP headers such as P3P policies, X-Frame-Options, etc. Part of the problem was that developing reusable components from that was difficult and depended on IIS. OWIN attempts to remove those issues.
A lot of the documentation says OWIN allows for decoupling between the web server and the web application, i.e. removing the dependency on IIS for hosting, say, Web API applications. But I have yet to see an example of a web application or Web API that used OWIN and was successfully ported from being hosted on IIS to some other web server. So are IIS and self-hosting the only options for this decoupling between web server and web app?
That's true for Web API 2 and SignalR 2. MVC 5 and older can't really be decoupled from IIS at the moment; MVC 6 will resolve this and is a pretty big overhaul. The ASP.NET website has a tutorial or two on self-hosting SignalR in a console app. You'll see in the tutorial a Startup class, just as if it were running on IIS or IIS Express; the only thing the console app does differently is bootstrap a server with HttpListener in its Main method.
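For reference, a minimal self-host sketch (assuming the Microsoft.Owin.SelfHost package); Startup is the same class you would use under IIS, only the bootstrapping differs:

using System;
using Microsoft.Owin.Hosting;

class Program
{
    static void Main()
    {
        // WebApp.Start builds the OWIN pipeline on top of an HttpListener-based server
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("Listening on http://localhost:8080 - press Enter to stop.");
            Console.ReadLine();
        }
    }
}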
[comment] With respect to point #2 above, what are the OWIN components here? Is Katana an OWIN component, is it the code we write using Katana, or both put together?
OWIN is really not much more than an abstraction layer (really, a specification) between the web application and the web server. There are different "implementations" of OWIN depending on the server you want to run on; Katana is an OWIN implementation that runs Web API 2 and SignalR 2. Kestrel is another example of an OWIN implementation.
When I searched for OWIN middleware examples, I only found Katana and Helios, which are the only two implementations of the OWIN spec. Katana is almost done and won't go beyond revision 3, and Helios is not yet supported by Microsoft, as per some articles. So what is the future of OWIN in that case?
That's still a bit up in the air, but OWIN is being used to develop the Kestrel web server that allows ASP.NET 5 (Core) to run on Linux / OS X.
The only detailed practical usage I have seen so far is using OWIN for authentication with OAuth 2. Are there any other such usages of keeping an OWIN implementation in the middle?
SignalR and Web API also use OWIN. This is useful because it means you can run a SignalR hub as a Windows service, and the same goes for Web API.
Any other such usages of keeping an OWIN implementation in the middle?
Platform Independence. Having OWIN in the middle means I can literally xcopy my MVC 6 Core web application from running on IIS to Kestrel on my Mac, and the OWIN implementation takes care of the rest.
In my startup class's Configuration method I tried to chain simple middleware code snippets to be able to see the request being sent in.
context.Request does not have an indexer in OWIN. Use Get<> instead:
app.Use(async (context, next) =>
{
    // owin.RequestBody is the raw request body stream stored in the OWIN environment
    context.Response.Write("hello world 2: " + context.Request.Get<object>("owin.RequestBody"));
    await next();
});
Note that owin.RequestBody is a bit of an implementation detail; the actual return type is internal. I'm not sure what you are attempting to get: if you want a query-string value, use Query on the request, or Headers if you want an HTTP header.
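For example, a quick sketch (same Configuration method, hypothetical values) of reading the query string and a header from the OWIN request:

app.Use(async (context, next) =>
{
    string name = context.Request.Query.Get("name");               // ?name=foo -> "foo"
    string userAgent = context.Request.Headers.Get("User-Agent");  // request header
    context.Response.Write("hello " + name + " from " + userAgent);
    await next();
});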
What are the various kinds of middleware that you have plugged in to your projects between the web server and the application?
Things for handling security, like a middleware component that handles nonces for Content Security Policy, which I wrote about on my personal blog. The gist of it is that it lets me add an HTTP header with a nonce:
public void Configuration(IAppBuilder app)
{
    app.Use((context, next) =>
    {
        // generate a cryptographically random nonce for this request
        var rng = new RNGCryptoServiceProvider();
        var nonceBytes = new byte[16];
        rng.GetBytes(nonceBytes);
        var nonce = Convert.ToBase64String(nonceBytes);

        // stash the nonce in the OWIN environment so views can read it later
        context.Set("ScriptNonce", nonce);

        // emit the CSP header that whitelists scripts carrying this nonce
        context.Response.Headers.Add("Content-Security-Policy",
            new[] {string.Format("script-src 'self' 'nonce-{0}'", nonce)});

        return next();
    });

    //Other configuration...
}
From there, in my Razor views I could add the nonce to <script> elements by getting the token from the OWIN context.
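A hedged sketch of what that view-side lookup might look like (GetOwinContext() comes from the Microsoft.Owin.Host.SystemWeb package; "ScriptNonce" is the key set by the middleware above):

// inside a Razor view: read the nonce back out of the OWIN environment
var owinContext = System.Web.HttpContext.Current.GetOwinContext();
string nonce = owinContext.Get<string>("ScriptNonce");
// then emit: <script nonce="@nonce"> ... </script>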
There are lots of other things it can be used for. Other frameworks can now easily inject themselves into the request/response process; the NancyFx framework, for example, can run on OWIN.
I am developing a chess-playing server based on Java and Netty, and a client application written in C++.
Messaging between client and server is based on the Google Protocol Buffers (protobuf) protocol.
Now I want the website to act as a client of the application server so that it is tightly integrated with the server app.
I have chosen the Play 2.1 (Java) framework for the website.
1)
First I ran into trouble trying to run my Netty server instance from the Play 2.1 application, so I added the following code to Global.java (the Play framework startup file):
@Override
public void onStart(Application app) {
    // ...
    // start the Netty game server on a background thread when Play boots
    new Thread() {
        public void run() {
            new NettyServer().run();
        }
    }.start();
}
Does it seem like a good idea to run my own Netty instance this way?
2)
I am not sure how to validate data, as the app server receives data to be validated from both the C++ client and the website, over different protocols.
The client sends binary-encoded data using the protobuf protocol, while the website sends a POST request. I want validation to be the same for both clients.
For validating data sent from the website I can use the Form<T> helper, though I can't use it for binary-encoded protobuf data. Any ideas on how to manage validation?
3)
I use Messages.get() from Play's i18n module to translate messages into the user's language. For a client using a browser, Play determines the user's language from the request headers and chooses the appropriate translations file.
But what about my C++ client? I don't know anything about the user's language, so I can't send it to my app.
Moreover, I didn't manage to find a way to set the language manually in Messages.get().
I need to store some information in session (or whatever the equivalent is in ASP.NET Web API) that I need to retrieve on each API request. We will have one API IIS web site, and multiple web site bindings will be added through host headers. When any request comes in, for example to api.xyz.com, the host header will be checked and that website's information stored in session, to be used on each subsequent API request when making a call to the database. Hope this is clear.
I found a way to handle session in ASP.NET Web API: "ASP.NET Web API session or something?".
I know a lot more about ASP.NET Web Forms, where we can override PreRequestHandler. I am looking for something similar in ASP.NET Web API, where I can put my logic to get the database id for the API domain (for example, api.xyz.com) and store it in session, which I then want to access in each API GET/POST request.
Somebody will definitely say that by adding session I am making the API stateful while REST is stateless, but I want to save a database trip on each API request. If I don't use session or something similar, I end up repeating the same logic for every API request.
Is there a better way to handle this situation? How?
Thanks.
If that logic needs to happen for all requests, you would be better off implementing a delegating handler (DelegatingHandler).
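A minimal sketch of that approach; the handler name, the "SiteId" property key, and ResolveSiteIdFromDatabase are hypothetical. The idea is to resolve the host once, cache it in memory, and stash the result on the request for controllers to read:

using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class TenantIdHandler : DelegatingHandler
{
    // one database lookup per host name, shared across requests
    private static readonly ConcurrentDictionary<string, int> SiteIds =
        new ConcurrentDictionary<string, int>();

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string host = request.RequestUri.Host;                    // e.g. "api.xyz.com"
        int siteId = SiteIds.GetOrAdd(host, ResolveSiteIdFromDatabase);

        // available to every controller via Request.Properties["SiteId"]
        request.Properties["SiteId"] = siteId;

        return base.SendAsync(request, cancellationToken);
    }

    private static int ResolveSiteIdFromDatabase(string host)
    {
        // hypothetical: hit the database only for a host we haven't seen yet
        return 0;
    }
}

Register it once in WebApiConfig.Register with config.MessageHandlers.Add(new TenantIdHandler()); after that it runs for every request before the controller is invoked.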
I haven't seen many Geneva-related questions yet; I have posted this question in the Geneva Forum as well...
I'm working on a scenario where we have a WinForms app with a wide install base, which will be issuing frequent calls to various services hosted centrally by us throughout its operation.
The services all use the Geneva Framework, and all clients are expected to call our STS first to be issued a token allowing access to the services.
Out of the box, using ws2007FederationHttpBinding, the app can be configured to retrieve a token from the STS before each service call, but obviously this is not the most efficient approach, as we are almost duplicating the effort of calling the services.
Alternatively, I have implemented the code required to retrieve the token "manually" from the app and then pass the same pre-retrieved token when calling operations on the services (based on the WSTrustClient sample and help on the forum); that works well, so we do have a solution, but I believe it's not very elegant, as it requires building the WCF channel in code, moving away from the wonderful WCF configuration.
I much prefer the ws2007FederationHttpBinding approach, whereby the client simply calls the service like any other WCF service, without knowing anything about Geneva, and the binding takes care of the token exchange.
Then someone (Jon Simpson) gave me [what I think is] a great idea: add a service, hosted in the app itself, to cache locally retrieved tokens.
The local cache service would implement the same contract as the STS; when receiving a request it would check whether a cached token exists, and if so return it; otherwise it would call the 'real' STS, retrieve a new token, cache it, and return it.
The client app could then still use ws2007FederationHttpBinding, but instead of having the STS as the issuer it would have the local cache.
This way I think we can achieve the best of both worlds: caching of tokens without service-specific custom code; our cache should be able to handle tokens for all RPs.
I have created a very simple prototype to see if it works, and - somewhat unsurprisingly, unfortunately - I am slightly stuck.
My local service (currently a console app) gets the request and, the first time around, calls the STS to retrieve the token, caches it, and successfully returns it to the client, which subsequently uses it to call the RP. All works well.
The second time around, however, my local cache service tries to use the same token again, but the client side fails with a MessageSecurityException:
"Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security."
Is there something preventing the same token from being used more than once? I doubt it, because when I reused the token as per the WSTrustClient sample it worked well. What am I missing? Is my idea possible? Is it a good one?
Here's the (very basic, at this stage) main code of the local cache:
static LocalTokenCache.STS.Trust13IssueResponse cachedResponse = null;

public LocalTokenCache.STS.Trust13IssueResponse Trust13Issue(LocalTokenCache.STS.Trust13IssueRequest request)
{
    if (TokenCache.cachedResponse == null)
    {
        Console.WriteLine("cached token not found, calling STS");

        //create proxy for real STS
        STS.WSTrust13SyncClient sts = new LocalTokenCache.STS.WSTrust13SyncClient();

        //set credentials for sts
        sts.ClientCredentials.UserName.UserName = "Yossi";
        sts.ClientCredentials.UserName.Password = "p#ssw0rd";

        //call issue on real sts
        STS.RequestSecurityTokenResponseCollectionType stsResponse = sts.Trust13Issue(request.RequestSecurityToken);

        //create result object - this is a container type for the response returned and is what we need to return
        TokenCache.cachedResponse = new LocalTokenCache.STS.Trust13IssueResponse();

        //assign sts response to return value...
        TokenCache.cachedResponse.RequestSecurityTokenResponseCollection = stsResponse;
    }

    //...and return (if a token was already cached, it is simply reused)
    return TokenCache.cachedResponse;
}
This is almost embarrassing, but thanks to Dominick Baier on the forum I now realise I had missed a huge point (I knew it didn't make sense! Honestly! :-) ).
A token is retrieved once per service proxy, assuming it hasn't expired, so all I needed to do was reuse the same proxy - which I had planned to do anyway but, rather stupidly, didn't in my prototype.
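In code terms (a minimal sketch; MyServiceClient stands in for whatever proxy is generated for the ws2007FederationHttpBinding endpoint):

// keep one proxy instance alive: the WS-Trust exchange with the STS happens on the
// first call, and the issued token is reused on later calls until it expires
var client = new MyServiceClient();
client.ClientCredentials.UserName.UserName = "Yossi";
client.ClientCredentials.UserName.Password = "p#ssw0rd";

client.DoWork();      // first call: token requested from the STS
client.DoMoreWork();  // subsequent calls on the same proxy: cached token reused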
In addition, I found a very interesting sample among the MSDN WCF samples - Durable Issued Token Provider - which, if I understand it correctly, uses a custom endpoint behaviour on the client side to implement token caching, which is very elegant.
I will still look at this approach, as we have several services, and so we could achieve even more efficiency by reusing the same token across their proxies.
So - two solutions, pretty much in front of my eyes; I hope my stupidity helps someone at some point!
I've provided a complete sample for caching the token here: http://blogs.technet.com/b/meamcs/archive/2011/11/20/caching-sts-security-token-with-an-active-web-client.aspx