How to Verify server to server communication - go

I'm having a few problems trying to decide what would be the best solution for something I'm trying to build.
In the application's simplest form, I have a front-end server which allows users to upload files which become associated with their account, for example a video or image. The upload form posts the upload request to the front-end server, which then uses a reverse proxy to pass the request directly along to a storage server's API (https://www.example.com/users/username/upload).
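For illustration, this is roughly how the forwarding is done today, as a minimal sketch using Go's net/http/httputil (the storage server URL and the /users/ route are placeholders for my real setup):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder storage server base URL.
	storage, err := url.Parse("https://www.example.com")
	if err != nil {
		log.Fatal(err)
	}

	// Forward /users/... requests (including uploads) straight to the storage server's API.
	proxy := httputil.NewSingleHostReverseProxy(storage)
	http.Handle("/users/", proxy)

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```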
What I'm currently stuck on is working out the best way to verify that a request arriving at the storage server's API was actually sent by the front-end server's reverse proxy, as opposed to somebody sending a POST request directly to the storage server's API endpoint.
Any suggestions would be really appreciated!

There are multiple ways to do it:
You can use an API gateway (e.g. Apigee, AWS API Gateway, etc.). The gateway can do request origin validation.
You can have the front-end app use OAuth (for the storage server) and use that to get authenticated/authorized at the storage server.
You can do IP whitelisting between the servers and allow only a restricted set of source IPs.
You can use mutually authenticated SSL (mutual TLS) between the servers to make sure only verified clients can access your API (maybe not for your problem directly, but it can be used in combination with the above); see the sketch after this list.
These are the simple options if you don't need a complicated or more expensive solution.
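For the mutual TLS option, here is a minimal sketch of what the storage server side might look like in Go, assuming you run your own CA and issue a client certificate to the front-end server (all file paths are placeholders):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load the CA that signed the front-end server's client certificate (placeholder path).
	caCert, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caCert)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  caPool,
			ClientAuth: tls.RequireAndVerifyClientCert, // reject callers without a valid client cert
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Only the reverse proxy, which holds a client cert signed by ca.pem, gets here.
			w.Write([]byte("upload accepted\n"))
		}),
	}

	// server.pem / server.key are the storage server's own certificate and key (placeholders).
	log.Fatal(server.ListenAndServeTLS("server.pem", "server.key"))
}
```

The front-end proxy would then be configured with the matching client certificate, so any direct POST to the storage API without that certificate is rejected during the TLS handshake.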

Related

Backendless.com Business Logic - Making an HTTP Request to Google Places API

I am using Backendless.com as a BAAS for my application. I have some custom logic running on their servers which need to make an HTTP request to the Google Places API.
I'm trying to generate an API key for the Backendless.com server to run this request, but I'm not sure which API key I need to generate. The Google developer console gives me 4 options: Server Key, Browser Key, Android Key, and iOS Key.
Server key seems to be the one I want to use... but I need to provide it with some IP addresses, and I don't know where or how to find those! The console states that they are optional, but it seems insecure not to add the server IP address. What are the risks? Where can I find Backendless.com app server IPs?
Server key is what you want. Restricting access is a good additional security step to take; it is not, however, required. IP restrictions basically make it so that if someone manages to steal your API key, they can't use it from IPs that are not whitelisted. You will have to ask Backendless.com whether they have a finite list of IPs they can guarantee your requests will come from.

Serve private mapping from S3 tiles by proxying data or signing urls through heroku?

I want to store mapping tiles in a private S3 bucket. Each tile has its own URL and each set of tiles could potentially have GBs of tiles.
I then want to visualise these tiles through a front-end mapping client (e.g. Leaflet). This client pulls tiles as it needs them using the tile's individual URL.
Because the bucket is private I need to authenticate each tile request but performance is fairly critical for this application.
Given that I want to use heroku to host my site, is it better to proxy the url through heroku and get it signed before requesting the tile from S3 or proxy the tile itself through heroku?
Are there any other options?
If the content in S3 is private, you are going to have to authorize the download one way or another, unless the bucket policy allows the proxy to access the content without authentication based on its IP address. Even then, the proxy still needs to verify that the user is authorized via (presumably) a cookie, which might mean a session database lookup.
Generating a signed URL is not a particularly expensive process, computationally, and (contrary to the impression I occasionally encounter) the signing process is done entirely on your server -- there's no actual interaction with S3 that occurs when generating a signed URL.
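To illustrate how lightweight local signing is, here is a rough sketch of generating a pre-signed GET URL with the AWS SDK for Go v2; the bucket name, key, and expiry below are placeholders, and nothing is sent to S3 in the process:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	presigner := s3.NewPresignClient(s3.NewFromConfig(cfg))

	// The URL is computed locally from your credentials; no S3 round trip happens here.
	req, err := presigner.PresignGetObject(context.TODO(), &s3.GetObjectInput{
		Bucket: aws.String("my-tile-bucket"),       // placeholder
		Key:    aws.String("tiles/10/511/340.png"), // placeholder
	}, s3.WithPresignExpires(15*time.Minute))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(req.URL) // hand this URL to the mapping client
}
```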
There's not really a single correct answer. I use both approaches, and a combination of them -- signing URLs in the application, signing them in the database (I have written a MySQL stored function that signs URLs), providing a link to a different app server that reads the user's session cookie and, if authorized, generates a signed URL and returns a 302 redirect, providing a link to a proxy server that proxies pre-signed URL requests to S3 (for real-time logging and to allow me to use my own domain name and SSL cert)... there are valid use cases for all of these approaches, and others.
Ideally I think you want to proxy the requests through a server that is authorized to access the S3 bucket to minimize authentication transactions.
Whether it's on Heroku or not, as long as the proxy server is able to authenticate the end user's access and maintain that session according to the required security policies you should be fine.
Cesium does support Proxies for Imagery and Terrain so once that is in place you should just have to configure the CesiumProxy with your server and be good to go.

Web app authentication and securing a separate web API (elasticsearch and kibana)

I have developed a web app that does its own user authentication and session management. I keep some data in Elasticsearch and now want to access it with Kibana.
Elasticsearch offers a RESTful web API without any authentication and Kibana is a purely browser side Javascript application that accesses Elasticsearch by direct AJAX calls. That is, there is no "Kibana server", just static HTML and Javascript.
My question is: How do I best implement common user sign on between the existing web app and Elasticsearch?
I am interested in specific Elasticsearch/Kibana solutions, but also in generic designs for single sign on to web apps and the external web APIs they use.
It seems the recommended way to secure Elasticsearch/Kibana is to have an Apache or Nginx reverse proxy in front that does SSL termination and user authentication (Basic auth). However, this doesn't play too well with the HTML form user authentication in my existing web app. Ideally I would like the user to sign on using the web app, and then be allowed direct access to the Elasticsearch API as well.
Solutions I've thought of so far:
Proxy everything in the web app: Have all calls go to the web app (server) which does the authentication, and have the web app issue the same request to the Elasticsearch web API and forward the response back to the browser.
Have the web app (server) store session info that Apache or Nginx somehow can look up and use to authorize access to the reverse proxy.
Ditch web app sign on and use basic auth for everything.
Note that this is a single installation, so I don't really need any federated SSO solutions.
My feeling is that the proxy within web app (#1) is a common generic solution, but it seems a bit heavyweight to have everything pass through the possibly slow web app, considering that Kibana uses the Elasticsearch API directly.
I haven't found an out of the box solution designed for the proxy authentication setup (#2). My idea is to have the web app store session info in memcache or the like and use some facility in the web server (Apache or Nginx) to look up the session based on a cookie and allow proxy access if authenticated.
The issue seems similar to serving static files directly using the web server (Apache or Nginx) while authenticating using a slow web app. Recommendations I've found for that are however very specific to that issue, like X-Sendfile.
You could use a session token. This is a quite generic solution. Let me explain. When the user logs in, you store a random string and pass it back to him. Each time the user tries to interact with your API, you ask for the session token you gave him. If it matches, you provide the service he is asking for; otherwise, you just reject the call. You should make session tokens expire after a certain interval of time and generate a new one each time the user logs back in.
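A minimal sketch of that idea as Go HTTP middleware; the in-memory store, the /login route, and the X-Session-Token header are illustrative assumptions (a real setup would add expiry and back the store with something like memcache or Redis):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
	"sync"
)

// Naive in-memory token store; a real deployment would use memcache/Redis and add TTLs.
var (
	mu     sync.Mutex
	tokens = map[string]bool{}
)

// newToken creates a random session token and remembers it.
func newToken() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	t := hex.EncodeToString(b)
	mu.Lock()
	tokens[t] = true
	mu.Unlock()
	return t
}

// requireToken wraps an API handler and rejects calls without a known token.
func requireToken(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		t := r.Header.Get("X-Session-Token") // assumed header name
		mu.Lock()
		ok := tokens[t]
		mu.Unlock()
		if !ok {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical login endpoint: check credentials against your web app's user store,
	// then hand back a fresh token.
	http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		// ... credential check omitted ...
		w.Write([]byte(newToken()))
	})

	// Everything under /api/ requires a valid token.
	http.Handle("/api/", requireToken(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("protected data\n"))
	})))

	http.ListenAndServe(":8080", nil)
}
```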
Hope this helps you.

CalDav server proxy

Basically I want a lightweight CalDAV server proxy which passes the username, password, and calendar name to a script, and the script will respond with either invalid user/pass, no such calendar, or the calendar itself.
The CalDav server would then return the appropriate response back to the server.
I will only have the users' calendars stored locally on the server for caching purposes, as I don't have direct access to the users' calendars. My script will try to log in to an external site (out of my control in any way) and fetch the calendar by crawling the site.
If possible, I would prefer the server to have WSGI support for communicating with my script.
I think your best bet here is to use sabre/dav and write a custom backend for it. As an example, at a company I used to work for I wrote a MongoDB backend for SabreDAV as well as getting the list of calendars from the system it was connected to. This is very similar to your use case, therefore check out this repository. You can find the backend implementation here and will need a lot of the other code to make the calendar listings work.
I would advise doing some caching and not scraping the remote site on each request, since CalDAV in connection with WebDAV-Sync will want to provide updates since the last time the client synchronized, and that will be harder to do if you are scraping in the moment.

Only allow access to my REST APIs from my own application?

We have a Windows app hosting a WebBrowser control that hits our REST APIs. We'd like to restrict access to the APIs so that they can only be reached from within the Windows app itself (for example, the APIs cannot be accessed in a browser, etc.).
How can we accomplish that? What is the most secure way without having to expose any kind of credential (for example, if we use HTTP Basic auth, the username and password can be seen by reverse engineering the app itself)?
Thanks a bunch!
EDIT: We plan to distribute the application freely so we have no control over where the connection will be made from.
Restrict the REST interface to only accept connections from 127.0.0.1 (the loopback address), and then have your REST-consuming application connect only with http://localhost or http://127.0.0.1 in the URLs (if you use the machine's external IP or DNS name, it'll be treated as a remote connection and denied access).
You can do this with web server settings, or within the code of your REST APIs.
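If you do it in code, a rough Go sketch of that loopback-only check could look like this (route and port are placeholders):

```go
package main

import (
	"net"
	"net/http"
)

// localOnly rejects any request that does not originate from the loopback interface.
func localOnly(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil || !net.ParseIP(host).IsLoopback() {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello from the REST API\n"))
	})
	// Binding to 127.0.0.1 already limits exposure; the middleware is a second check.
	http.ListenAndServe("127.0.0.1:8080", localOnly(api))
}
```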
I had a similar situation during a project where we distributed an iPhone app that also connected to a REST api that my team developed.
For security we used somewhat of a three-legged scenario. The app was required to authenticate using the user's credentials against a standalone service responsible only for authenticating and generating access tokens. Once the app received a valid access token, subsequent requests to the api required sending this token in the Authorization header.
You could do something similar. If you come up with a credential scheme to authenticate your app as valid API consumers you could use basic auth over HTTPS to obtain tokens, and then only by using those tokens could a consumer gain access to the rest of the API.
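As a rough sketch of that flow from a Go client's point of view; the endpoints, form fields, and bearer-token format here are assumptions for illustration, not a specific API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Step 1: exchange the user's credentials for an access token (hypothetical endpoint).
	form := url.Values{"username": {"alice"}, "password": {"secret"}}
	resp, err := http.Post("https://auth.example.com/token",
		"application/x-www-form-urlencoded", strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	// Step 2: call the API with the token in the Authorization header.
	req, _ := http.NewRequest("GET", "https://api.example.com/v1/stuff", nil)
	req.Header.Set("Authorization", "Bearer "+out.AccessToken)
	apiResp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer apiResp.Body.Close()
	fmt.Println(apiResp.Status)
}
```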