How to maintain a session in Swagger Editor

I am using Swagger for building API documentation. I am new to it. What I am doing is logging in first and then calling a list API. But because the session cookies are not maintained, the list call does not execute: you need to log in to the application first and then call the list endpoint.
How can I do this in Swagger Editor (http://editor.swagger.io/#/)?
Thank you

I just spent some time struggling with this same question and as far as I can tell the Swagger Editor will not make API calls with xhr.withCredentials = true, which means that your browser will not send cookies even if the server's CORS policy allows it. There's been some discussion of updating Swagger Editor to allow an option to do this (e.g., https://github.com/swagger-api/swagger-js/issues/251), but it doesn't appear this has been done quite yet.
Your best option is probably to avoid the entire cross-origin security issue by hosting Swagger Editor on your own domain. This is one of the suggestions: https://github.com/swagger-api/swagger-editor/blob/master/docs/cors.md#host-swagger-editor-in-your-own-domain. Just run it as if it were one of your own apps on your dev site.
Another suggestion is to just disable Chrome web security: https://github.com/swagger-api/swagger-editor/blob/master/docs/cors.md#run-swagger-editor-in-a-browser-that-ignores-http-access-control. I haven't tried this, but it may not work because Swagger Editor is still not setting withCredentials on the XHR request.
Finally, if you're familiar with docker, you can run a Swagger Editor docker instance and link everything up. This is complicated but I accomplished this by:
running our backend inside a docker container
running a Swagger Editor instance inside another docker container on the same network (https://hub.docker.com/r/swaggerapi/swagger-editor/)
running an nginx reverse proxy server inside a third docker container such that any requests to, e.g., http://localhost/dev/swagger-editor/ are proxy-forwarded to the swagger docker container and any requests to http://localhost/api/ are proxy-forwarded to your backend. This way the browser only sees requests to localhost and everything works just fine.
But you need to be willing to get into docker and nginx reverse proxy configs before having a reasonable expectation you'll be successful with this. Otherwise this could be another rabbit hole. Good luck!
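To make that third option more concrete, here is a minimal sketch of the nginx reverse-proxy config it describes (the container names swagger-editor and backend and the port 8080 are assumptions; substitute whatever your docker network actually uses):

    server {
        listen 80;

        # anything under /dev/swagger-editor/ goes to the Swagger Editor container
        location /dev/swagger-editor/ {
            proxy_pass http://swagger-editor:8080/;
        }

        # anything under /api/ goes to the backend container
        location /api/ {
            proxy_pass http://backend:8080/api/;
            proxy_set_header Host $host;
        }
    }

With this in place the browser only ever talks to http://localhost, so there is no cross-origin request and session cookies are sent as usual.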

Related

How to deploy a js web app that fetches data from an api

How can I deploy a JS web application that uses an API?
I have hosted it on Netlify but it doesn't fetch the data.
Everything works fine on localhost.
Link: hiuhu-theatre.netlify.app
In Firefox you can see that the request made by the getMovies function was blocked; the console shows the reason and links to this URL.
Basically, you're using the http protocol for that request while your website is served over https.
To fix that, simply change your "http://www.omdbapi.com/" to start with "https://" instead.
Also, if you can, do not put the API key in client-side code; anyone can steal it and use it themselves (which might make you pay more for the service or hit your request limit very quickly). Instead, make the request to your own back-end server so it fetches the data while keeping the API key hidden.
It works locally because you're using http locally as well.
I overrode the getMovies function in my browser to use https and it worked nicely.
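If you do want to keep the key out of the browser, one possible shape for that back end, purely as an illustration (it assumes you run your own nginx server rather than a static Netlify site, and the /api/movies path and key placeholder are made up), is a reverse-proxy location that injects the key server-side:

    # the browser calls /api/movies?s=batman on your own server
    location /api/movies {
        # prepend the secret key to whatever query string the client sent,
        # so the key never appears in client-side JS
        set $args "apikey=YOUR_OMDB_KEY&$args";
        proxy_pass https://www.omdbapi.com/;
        proxy_set_header Host www.omdbapi.com;
    }

The client-side getMovies would then fetch from /api/movies instead of calling omdbapi.com directly.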

Forwarding HTTP headers using Juniper

I'm working with a sysadmin that uses a Juniper solution that behaves as a proxy. I have no idea what it is, but here's a picture of the web interface: http://imagebin.ca/v/1UKN1jGYPUWd
Through that proxy, I'm trying to use Sharepoint's REST API, unfortunately there are some headers (such as X-RequestDigest) that Juniper's proxy doesn't forward to Sharepoint.
Basically, I need the equivalent of nginx's proxy_pass_request_headers for Juniper's applications.
The sysadmin doesn't seem to know what HTTP header forwarding is, or how to configure it. Can anyone identify the solution he's using from the picture? Does anyone know where to find documentation about this?
Further to my comment added above, there appears to be no way to implicitly pass variables around. You can tell the current IVEOS images that the Web URL you're linking to is a Sharepoint Site, and it'll do "clever" things with the URL, but I'm not exactly sure what you want it to do, and whether they'll handle it.
Here are the screen shots for the "Sharepoint" configuration panels on the Web Resources page. As I'm not a Sharepoint Admin, I can't tell you whether these are useful to you or not.
I hope it helps!
You may be looking for the Web Resource custom header policy
https://www.juniper.net/documentation/en_US/sa8.0/topics/task/operational/secure-access-web-rewrite-custom-header-policy.html
Edit: The first resource became a dead link. New link: https://www.juniper.net/techpubs/en_US/nsm2012.2/topics/task/configuration/remote-management-secure-web-resource-policy-configuring-nsm.html
For custom headers (to send some user information) we've used the "Web Rewriting Resource Policy":
SSO Cookies/Headers > General tab > Headers and Values
to pass custom user data (user name, role, client certificate).
I assume you have the backend application (Sharepoint) configured as a PTP (Passthrough Proxy) web resource. I am pretty confident that only standard HTTP headers are passed to the backend by default :(
To pass all custom headers I found the following book (Juniper Networks Secure Access SSL VPN Configuration Guide): https://books.google.be/books?id=5OYf6u5vzFsC&pg=PA369&lpg=PA369&dq=Juniper+pass+custom+headers&source=bl&ots=s5oF5NEKjP&sig=8091EV2Pyw6pIFQifMOIR2pLpLk&hl=de&sa=X&ved=0ahUKEwiFwpf6m_DOAhWFWRQKHXoRD0EQ6AEIPDAE
where it says
Passing custom headers can be enabled by:
Users > Resource Policies > Web > Custom Headers
This option may not be visible in the admin interface by default; it needs to be enabled under:
Users > Resource Policies > Web > Web ACL, where there's a "Customize" button

Can nginx be configured to allow a path like /api to pass through, and add a header to the request

I am using NGINX as my web server for html/js/css files and my web app UI. It is a single-page app that makes AJAX requests to a back-end Jetty server. Previously I deployed everything in Jetty and the AJAX calls worked fine. In separating the back end from the web UI tier, I am now trying to figure out how to configure NGINX to let AJAX requests pass through to Jetty. But I ALSO want to prevent someone from watching network traffic, seeing the AJAX calls my app makes, and then scripting those calls themselves. To do this, I believe that if I can configure NGINX to ADD a custom header to the requests as they pass through (is this even possible?), I could then accept only requests with that header at my Jetty API level.
If that is possible, is it the right way to handle this so that outsiders can't get into my back-end API? Is there a way they could figure out that my nginx server is adding a header, short of breaking into my server and reading the configuration?
If your application calls your api via Ajax on the client there's nothing you can do to stop someone from calling it directly (assuming they otherwise have access to the page). At the end of the day, an Ajax request is just a request made from the client in JS. Now, there are lots of stupid ways to make it more difficult, but, if anyone really wants to call your api directly, they can.
If you're just talking about only allowing access through nginx (or specifically your /api location block), just bind jetty to localhost only.
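To sketch the nginx side of that (the header name, shared-secret value, and Jetty port are assumptions): nginx injects a private header into every /api request before proxying it, and Jetty listens only on 127.0.0.1 so it cannot be reached except through nginx.

    server {
        listen 80;

        location /api/ {
            # add a header the Jetty API can check; only requests that
            # passed through nginx will carry it
            proxy_set_header X-Internal-Gateway "some-shared-secret";
            # Jetty is bound to localhost only, so this is the only way in
            proxy_pass http://127.0.0.1:8080;
        }
    }

As noted above, this only proves a request went through nginx; anyone who can load your page can still call /api via nginx, so don't treat the header as real security.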

Azure and CORS Access-Control-Allow-Origin with ajax and php

First, I'm not on the web side of our world, so be nice to the backend guy.
A quick background: for a personal need I've developed a Google Chrome extension. It is basically a webpage loaded in a Chrome window and... yeah, that's it. Everything is on the client side (scripts, styles, images, etc.); only the data comes from a server through AJAX calls. A cron job calls a PHP script every hour to generate two files. One, data.json, contains the "latest" data in JSON format. The other, hash.json, contains the hash of the data. The client Chrome application uses local storage. If the remote hash differs from the local one, it simply retrieves the data file from the remote server.
As I have a BizSpark account with Azure, my first idea was: an Azure Web Site with PHP for the script, a simple homepage and the generated files, and the Azure Scheduler for the jobs.
I've developed everything locally and everything runs fine... but once on the Azure platform I get this error
XMLHttpRequest cannot load http://tso-mc-ws.azurewebsites.net/Core/hash.json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:23415' is therefore not allowed access.
But what I really can't understand is that I'm able (and you'll be too) to get the file with my browser... So I just don't get it... I've also tried, based on some posts I've found on SO and other sites, to manipulate the config and add extra headers; nothing seems to be working...
Any idea?
But what I really can't understand is that I'm able (and you'll be too) to get the file with my browser... So I just don't get it
So when you type in http://tso-mc-ws.azurewebsites.net/Core/hash.json in your browser's address bar, it is not a cross-domain request. However when you make an AJAX request from an application which is running in a different domain (http://localhost:23415 in your case), that's a cross-domain request and because CORS is not enabled on your website, you get the error.
As far as enabling CORS is concerned, please take a look at this thread: HTTP OPTIONS request on Azure Websites fails due to CORS. I've never worked with PHP/Azure Websites so I may be wrong with this link but hopefully it should point you in the right direction.
OK, this will perhaps be a bit of a troll answer, but that's not my point (I'm a .NET consultant, so... nothing against MS).
I picked a Linux Azure virtual machine, installed Apache and PHP, configured Apache, set some permissions, defined the header for CORS, and configured a cron job, all in +/- 30 minutes... As my goal was just to get it running, the problem is solved: it's running.

Cross Domain request for service using SproutCore

I have been trying to get this resolved, without any success.
I have a webapp residing on my domain, say www.myDomain.com. I need to call a service which is present on another domain, say www.anotherDomain.com/service.do?
I'm using SproutCore's SC.Request.getUrl(www.anotherDomain.com/service.do?) to call that service.
I get an error that says, Origin www.myDomain.com is not allowed by access-control-allow-origin.
When I was in dev stages, and using sc-server, the issue was resolved using proxies. Now that I have deployed the app to an actual server, I replaced all the lines where I had set up the proxy with the actual domain name. I have started getting that error again.
The problem is that I CANNOT MAKE ANY CHANGES to the server on the other domain. All the posts that I have come across state that the other server on the other domain ought to provide access-control-allow-origin header and that it ought to support the OPTIONS verb.
My question is, is it possible for me to connect to that service using SproutCore's SC.Request.getUrl() method?
Additionally, the other posts that I have read mentioned that a simple GET request ought not to be preflighted. Why then are my requests going out as OPTIONS instead of GET?
Thanks a ton in advance! :D
This is not a SproutCore issue; it's a JavaScript Same-Origin Policy issue.
If you can't modify the production server, you have no option but to develop your own proxy server, and have your proxy hit the real service.
This is effectively replacing sc-server in your production environment.
All this server would do is take the incoming request and pass it along to www.anotherDomain.com/service.do.
You would need to make sure you passed along all parameters, cookies, headers, the HTTP verb, etc.
This is far from ideal, because now errors can occur in more places. Did the real service fail? Did the proxy fail? etc.
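For illustration only (nginx is just one way to build such a proxy, and the /proxy path is made up), the forwarding rule could look like this inside the server block for www.myDomain.com:

    location /proxy/service.do {
        # the browser makes a same-origin request to /proxy/service.do?...,
        # and nginx forwards it, query string included, to the real service
        proxy_pass https://www.anotherDomain.com/service.do;
        proxy_set_header Host www.anotherDomain.com;
    }

Your SproutCore app would then call SC.Request.getUrl('/proxy/service.do?...'), which is a same-origin request as far as the browser is concerned.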
If you could modify the other domain, you could
1) deploy your SC app there.
2) put in the CORS headers so you could make cross domain requests
