WebForms:
In a WebForms application with session state, running in a web farm environment, the session can be stored in SQL Server, which all the servers in the farm can access. This means a logged-in user's request gets the same session regardless of which server in the farm it hits.
Web API:
I understand that Web API is stateless by design, so for a true Web API application I don't need to worry about how state is maintained. Usually an authentication token is passed between requests, and as long as it is valid, a logged-in user gets access to whatever resources they need on the server. This is fine with one web server hosting the Web API. But what about a web farm? How is the "session" (or the equivalent concept in Web API) managed in a Web API farm?
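To make that stateless pattern concrete, here is a minimal sketch of token checking as a Web API message handler. ValidateToken is a placeholder for whatever check a real setup would use (signature verification, or a lookup in a store shared by all servers); nothing in the handler is tied to a particular machine, which is what makes it farm-safe.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Minimal sketch of farm-safe, stateless token checking for ASP.NET Web API.
public class TokenHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;
        if (auth == null || auth.Scheme != "Bearer" || !ValidateToken(auth.Parameter))
        {
            // No server-side session needed: the request itself carries
            // everything required to accept or reject it.
            return new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
        return await base.SendAsync(request, cancellationToken);
    }

    private static bool ValidateToken(string token)
    {
        // Placeholder: verify the token's signature/expiry, or look it up in
        // Redis/SQL so that any node in the farm reaches the same verdict.
        return !string.IsNullOrEmpty(token);
    }
}
```

The handler would be registered once at startup, e.g. config.MessageHandlers.Add(new TokenHandler()) in the Web API configuration.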
I know Azure gives the following options, to name a few:
Azure SQL Server
Azure Table storage / Queue
Cache Service
They seem to add extra complexity to the architecture (which is much simpler in WebForms using SQL Server session state).
One other, slightly different question (which might be a bit basic): how is the request/response traced in a Web API farm? I.e., when a client makes a request to the Web API and the Web API sends an async response, how does the server make sure it is traced back to the right client?
Edit:
I am not looking to implement session state in Web API, but rather to understand how the same thing can be achieved in Web API without session state.
Thanks
Yes. Using session in REST is a really bad thing.
But you can simulate a session in Web API as well. In WebForms, the server adds the generated session id to the HTML form, so when the user submits it, that session id is included in the request. You can simulate the same thing with Web API. You can check this: Accessing Session Using ASP.NET Web API
The best option is to store the information in memcached or Redis with an expiration. That will work in a web farm as well.
Anyway, I don't recommend using a "session" in Web API.
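For the Redis option, a minimal sketch of what that could look like with the StackExchange.Redis client; the connection string, key prefix, and 20-minute expiry are illustrative choices, not requirements:

```csharp
using System;
using StackExchange.Redis;

// Per-user state kept in Redis so any server in the farm can read it.
class FarmStateStore
{
    static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379"); // example address

    public static void Save(string token, string payload)
    {
        IDatabase db = Redis.GetDatabase();
        // Expire the entry after 20 minutes, like a session timeout.
        db.StringSet("state:" + token, payload, TimeSpan.FromMinutes(20));
    }

    public static string Load(string token)
    {
        IDatabase db = Redis.GetDatabase();
        RedisValue value = db.StringGet("state:" + token);
        return value.HasValue ? (string)value : null;
    }
}
```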
We have this architecture:
Web Server: Web Application is deployed (html, javascript, css)
Application Server: WebApi is deployed
The problem is, I cannot make AJAX requests reach the Application Server because it is behind the firewall.
The Web Application is supposed to be used publicly to the internet users.
What changes should we do to make it work?
Should we move our Web Application to the Application Server? But then how would it be accessible on the internet?
Thanks in advance for suggestions/advice.
You're going to have to put an exception in the firewall for the address of your web server. That way your web server can access the API but nothing else can (well, not quite nothing else: other stuff on that web server can too, but that can easily be solved by hosting your web app on its own dedicated web server).
If your Web Application makes direct calls to the Web API endpoint (e.g. it is a single-page application that uses a client-side JavaScript framework like AngularJS, and/or it makes AJAX calls to your application server's address), there is no way for your clients to access your API if you do not allow public access to your application server.
That's because your client resides inside your users' web browsers.
You would have to allow incoming connections from the internet to your Application Server in your firewall.
Well, it all depends on how you look at things and how distributed your application should be (criteria like load, security).
In general, the Web API might be just one more client (from your application server's perspective).
On the other hand, in a robust/distributed system, you would have the Web API only as an endpoint (controllers, mappers and things like that) that your mobile/AJAX clients send requests to, and the Web API then communicates with the application server (where your business logic is).
Having the Web API communicate with the DB directly is not a good idea, because as you add clients to the application server (MVC, Web API, services, etc.) you end up with as many DB access points as you have clients. So it's a code-maintenance problem, plus your view tier becomes aware of the DB.
Ideally, you want the application server as the tier where all your business logic lives, the one that all your clients target (MVC web app, Web API, desktop, services, etc.) and the only one that communicates with your DAL. You can then set firewall rules on your application server to allow incoming traffic from trusted sources (your other servers) instead of from the whole internet (AJAX).
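As a rough illustration of that layout, here is a minimal C# sketch of a public Web API action that simply relays a call to the application server behind the firewall. The internal hostname and the orders route are made-up examples:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

// The public API is a thin endpoint; business logic stays on the app server.
public class OrdersController : ApiController
{
    private static readonly HttpClient Internal = new HttpClient
    {
        BaseAddress = new Uri("https://app-server.internal/") // hypothetical
    };

    [HttpGet]
    public async Task<IHttpActionResult> Get(int id)
    {
        // Only this machine needs a firewall exception to the app server;
        // browsers never see the internal address.
        HttpResponseMessage response = await Internal.GetAsync("orders/" + id);
        if (!response.IsSuccessStatusCode)
            return StatusCode(response.StatusCode);

        string body = await response.Content.ReadAsStringAsync();
        return Ok(body);
    }
}
```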
I am setting up an API for a mobile app (and, down the line, a website). I want to use OAuth 2.0 for authentication of the mobile client. To optimize my server setup, I wanted to set up an OAuth server (Lumen) separate from the API server (Laravel). Also, my DB lives on its own separate server.
My question is: if I use separate servers and a package like lucadegasperi/oauth2-server-laravel, do I need to have the package running on both servers?
I am assuming this would be the case, because the OAuth server will handle all of the authentication work to issue access tokens and refresh tokens, but the API server will still need to check the access token on protected endpoints.
Am I correct in the above assumptions? I have read so many different people recommending that the OAuth server be separate from the API server, but I can't find any tutorials about how the multi-server dynamic works.
BONUS: I run my DB migrations from my API server, so I would assume the OAuth package's migrations would need to be run from the API server as well. Correct?
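Laravel/Lumen specifics aside, the generic shape of the two-server dynamic is that the auth server issues tokens and the resource (API) server only verifies them, either by reading the shared token store or by asking the auth server. Below is a minimal sketch of the second option, an RFC 7662-style introspection call, shown in C# purely for illustration; the URL and the response check are assumptions, and this particular Laravel package may rely on a shared database instead:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Resource-server side: the API never issues tokens, it only verifies them.
public class TokenChecker
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<bool> IsActiveAsync(string accessToken)
    {
        // RFC 7662-style introspection call to the separate OAuth server.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "token", accessToken }
        });
        HttpResponseMessage response = await Http.PostAsync(
            "https://auth.example.com/oauth/introspect", form); // hypothetical URL
        string body = await response.Content.ReadAsStringAsync();

        // Crude check for brevity; real code would parse the JSON properly.
        return response.IsSuccessStatusCode && body.Contains("\"active\":true");
    }
}
```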
I have developed a web app that does its own user authentication and session management. I keep some data in Elasticsearch and now want to access it with Kibana.
Elasticsearch offers a RESTful web API without any authentication, and Kibana is a purely browser-side JavaScript application that accesses Elasticsearch through direct AJAX calls. That is, there is no "Kibana server", just static HTML and JavaScript.
My question is: How do I best implement common user sign on between the existing web app and Elasticsearch?
I am interested in specific Elasticsearch/Kibana solutions, but also in generic designs for single sign on to web apps and the external web APIs they use.
It seems the recommended way to secure Elasticsearch/Kibana is to have an Apache or Nginx reverse proxy in front that does SSL termination and user authentication (Basic auth). However, this doesn't play too well with the HTML form user authentication in my existing web app. Ideally I would like the user to sign on using the web app, and then be allowed direct access to the Elasticsearch API as well.
Solutions I've thought of so far:
Proxy everything in the web app: Have all calls go to the web app (server) which does the authentication, and have the web app issue the same request to the Elasticsearch web API and forward the response back to the browser.
Have the web app (server) store session info that Apache or Nginx somehow can look up and use to authorize access to the reverse proxy.
Ditch web app sign on and use basic auth for everything.
Note that this is a single installation, so I don't really need any federated SSO solutions.
My feeling is that the proxy within the web app (#1) is a common generic solution, but it seems a bit heavyweight to have everything pass through the possibly slow web app, considering that Kibana uses the Elasticsearch API directly.
I haven't found an out of the box solution designed for the proxy authentication setup (#2). My idea is to have the web app store session info in memcache or the like and use some facility in the web server (Apache or Nginx) to look up the session based on a cookie and allow proxy access if authenticated.
The issue seems similar to serving static files directly from the web server (Apache or Nginx) while authenticating using a slow web app. Recommendations I've found for that, however, are very specific to that issue, like X-Sendfile.
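To make option #1 concrete, here is a minimal sketch of the pass-through approach, shown in C# although the web app's stack isn't specified in the question; the Elasticsearch address and the IsSignedOn check are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

// Option #1: the web app authenticates the caller, then relays the query
// to Elasticsearch and returns the answer to the browser.
public class SearchController : ApiController
{
    private static readonly HttpClient Es = new HttpClient
    {
        BaseAddress = new Uri("http://elasticsearch.internal:9200/") // hypothetical
    };

    [HttpGet]
    public async Task<IHttpActionResult> Get(string index, string q)
    {
        if (!IsSignedOn())
            return Unauthorized(); // not signed on to the web app

        // Real code would URL-encode q before building the request.
        HttpResponseMessage response = await Es.GetAsync(index + "/_search?q=" + q);
        string body = await response.Content.ReadAsStringAsync();
        return Ok(body);
    }

    private bool IsSignedOn()
    {
        // Placeholder: validate the web app's existing session cookie here.
        return Request.Headers.Contains("Cookie");
    }
}
```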
You could use a session token. This is a quite generic solution. Let me explain. When the user logs in, you generate a random string, store it, and pass it back to the user. Each time the user tries to interact with your API, you ask for the session token you gave them. If it matches, you provide the service they are asking for; otherwise, you just ignore the call. You should make session tokens expire after a certain interval of time and issue a new one each time the user logs back in.
Hope this helps you.
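A minimal sketch of that scheme follows. The in-memory dictionary is for brevity only; shared between a web app and a proxy, or across a farm, the tokens would live in a shared store such as Redis or SQL instead:

```csharp
using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;

// Issue a random string at login, remember it with an expiry, and check it
// on every API call.
public static class SessionTokens
{
    private static readonly ConcurrentDictionary<string, DateTime> Tokens =
        new ConcurrentDictionary<string, DateTime>();

    public static string Issue()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        string token = Convert.ToBase64String(bytes);
        Tokens[token] = DateTime.UtcNow.AddMinutes(30); // expiry interval
        return token;
    }

    public static bool Validate(string token)
    {
        DateTime expires;
        return Tokens.TryGetValue(token, out expires)
            && expires > DateTime.UtcNow;
    }
}
```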
Firstly, I'm relatively new to Web API / CORS and security implementation.
This question is specifically with regards to security. The Web API houses extremely sensitive data and provides clients with the ability to execute transactions online.
The context :
I have a Web API self-hosted as a Windows service with a fixed port.
The Web API is sitting behind a firewall / DMZ on an internal network.
The Web API (using CORS) only allows traffic from the external server.
The external server hosts our web site using IIS.
The Web API is making use of token authentication (bound to the client IP to avoid hijacking).
Both the external website and internal Web API force the use of SSL.
The problem :
The web page makes AJAX calls via JavaScript to the Web API. However, the Web API is not directly exposed to the internet.
What would the security impact be of having the setup below?
What sort of vulnerabilities would I be exposing my network to by doing so?
Is there a better way of implementing such a setup?
E.g.
User enters https://test.mydomain.com into the browser and is served a page.
An AJAX call is made to https://test.mydomain.com/api/test/action
The external server routes https://test.mydomain.com/api messages to the internal server https://myInternalWebAPI/api/test/action, which is not exposed to the public.
So this requires a little bit of legwork, but it's implemented in a production environment, so I thought I'd share the solution.
I created a WCF service and a WebAPI.
The primary WCF service resides on the internal network and contains all the business logic and database connectivity.
The proxy WebAPI mimics the WCF service structure and is exposed to the public.
The proxy WebAPI is called from the client (JavaScript); the proxy WebAPI then calls the internal server hosting the WCF service and voila, victory.
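A minimal sketch of what that proxy layer can look like; the contract, binding, internal address, and operation name are hypothetical stand-ins for whatever the real WCF service exposes, and channel cleanup is omitted for brevity:

```csharp
using System.ServiceModel;
using System.Web.Http;

// Hypothetical contract mirroring what the internal WCF service exposes.
[ServiceContract]
public interface IBusinessService
{
    [OperationContract]
    string GetCustomer(int id);
}

// The proxy WebAPI holds no business logic; it just relays calls across the
// firewall boundary to the internal WCF service.
public class CustomerController : ApiController
{
    private static readonly ChannelFactory<IBusinessService> Factory =
        new ChannelFactory<IBusinessService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://app-server.internal/BusinessService.svc"));

    [HttpGet]
    public IHttpActionResult Get(int id)
    {
        IBusinessService service = Factory.CreateChannel();
        string customer = service.GetCustomer(id);
        return customer == null ? (IHttpActionResult)NotFound() : Ok(customer);
    }
}
```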
I am using a jQuery plug-in to create a cookie (https://github.com/carhartl/jquery-cookie) and have been allowing the cookie to default to a "session cookie", which is exactly the behavior I would like to have. My concern is that when I deploy my web site to production, it will be in a web farm in that environment. Can anyone help me understand what kind of issues, if any, I will run into with session cookies on a web farm? The version of IIS on the web farm is IIS 7.5.
No issues at all. Cookies are stored on the client. They don't know or care about your server side infrastructure and how many nodes you have.
There are 2 types of cookies:
Session cookies - live only in the memory of the web browser and do not survive a browser restart.
Persistent cookies - stored as files on the file system for a specified duration and survive browser restarts.
From the perspective of the server it makes strictly no difference. The cookie will be sent by the client on each request and the node that is serving the request will receive this cookie.
If, on the other hand, you are storing some information in the memory of the web server, for example by using the ASP.NET session with the default InProc state, then you will have problems. But that has nothing to do with client-side cookies.
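To illustrate the two cookie types from the server side (the same distinction applies when the cookie is created in JavaScript, as with the jquery-cookie plugin): omitting an expiry yields a session cookie, setting one yields a persistent cookie. The cookie names and values below are just examples:

```csharp
using System;
using System.Web;

// Either kind of cookie is sent by the browser on every request, to
// whichever farm node happens to answer.
public static class CookieExamples
{
    public static void Set(HttpResponse response)
    {
        // Session cookie: no Expires set, lives only in browser memory.
        response.Cookies.Add(new HttpCookie("prefs", "compact"));

        // Persistent cookie: written to disk and survives browser restarts
        // until the expiry date passes.
        var remember = new HttpCookie("theme", "dark")
        {
            Expires = DateTime.UtcNow.AddDays(30)
        };
        response.Cookies.Add(remember);
    }
}
```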