Since Parse is mainly about cross-platform management of data in the cloud, and data security is handled through Class-Level and Object-Level (ACL) controls, we are told it is fine to ship the SDK/REST keys to the client as long as the security levels are set properly. For example, a file upload through the Parse REST API requires specific headers such as
X-Parse-Application-Id
X-Parse-REST-API-Key
to be sent to the https://api.parse.com/1/files/ endpoint. Since we have already exposed those credentials to the client, isn't it possible for anyone to abuse this endpoint and upload countless irrelevant files to the application's file storage on the Parse platform? Yes, the data itself can be secured by setting the security levels properly, but what about file storage? The application's file storage quota can be exploited, can it not?
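Concretely, an upload with nothing but those two exposed headers looks roughly like the sketch below (using Java's built-in HttpClient; the key values and the file name are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class ParseFileUpload {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Anyone holding the two client-side keys can build this same request.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.parse.com/1/files/pic.jpg"))
                    .header("X-Parse-Application-Id", "YOUR_APP_ID")     // shipped with the app
                    .header("X-Parse-REST-API-Key", "YOUR_REST_API_KEY") // shipped with the app
                    .header("Content-Type", "image/jpeg")
                    .POST(HttpRequest.BodyPublishers.ofFile(Path.of("pic.jpg")))
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // A successful upload returns the stored file's name and URL as JSON.
            System.out.println(response.statusCode() + " " + response.body());
        }
    }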
The main question is: do those file uploads count as API requests, and do uploaded files count against the quota before they are linked to any object in the application? If they do, isn't this open to exploitation?
I am working on an app that has some social network elements: users can create posts with images and they can share these publicly or with friends.
I am now considering the security aspect of this. These images should only be available to the person that uploaded them and the people they select to view them.
From the posts I have seen, it seems one recommended approach is to expose an API endpoint through my backend service so that access is controlled there (this way I can check a user's permissions) and the requested image is returned, but I feel that serving images this way would be quite expensive.
Are there any other approaches that do not sacrifice security but achieve a good performance?
In case it matters, I am using Spring Boot for my backend, Expo + React Native for my app, and I am planning to store the images on AWS S3.
It turns out AWS S3 allows access to files through signed URLs, which means only people holding the signed URL can access the file. The URL can be further restricted by specifying how long it remains valid.
Generating these URLs can be done by the back-end service without reaching out to AWS, so that does not create a big performance hit.
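For illustration, here is a minimal sketch of generating such a URL with the AWS SDK for Java v2 (the bucket and object key are hypothetical; the backend would run this only after checking the requesting user's permissions):

    import java.time.Duration;

    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.presigner.S3Presigner;
    import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
    import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;

    public class SignedImageUrl {
        public static void main(String[] args) {
            // The signature is computed locally with the configured credentials;
            // no request is sent to AWS just to create the URL.
            try (S3Presigner presigner = S3Presigner.create()) {
                GetObjectRequest getObject = GetObjectRequest.builder()
                        .bucket("my-post-images")   // hypothetical bucket
                        .key("posts/42/photo.jpg")  // hypothetical object key
                        .build();

                GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                        .signatureDuration(Duration.ofMinutes(10)) // URL stops working after 10 minutes
                        .getObjectRequest(getObject)
                        .build();

                PresignedGetObjectRequest presigned = presigner.presignGetObject(presignRequest);
                System.out.println(presigned.url()); // hand this to the client that passed the permission check
            }
        }
    }

The client then downloads the image straight from S3, so the backend never has to stream the bytes itself.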
I am building a dashboard on a Vue JS + Vuetify + Axios + Laravel stack. Right now I am working on the user profile, where users can upload a picture for their avatar and can also upload their business licence (via a different uploader).
Users need to be able to update those documents later on.
What is the best strategy to implement this requirement cleanly and with proper security?
Store the files in a private area of the Laravel tree, or in a public one after renaming them with a random string plus the user name?
Store the file as a blob directly in MySQL and retrieve it from there?
Store only the file's path in MySQL, while keeping the file itself in a public/private folder under the Laravel tree?
For authentication I plan to use JWT and Websanova.
Where you store the avatar depends on where it needs to be displayed. Will it be shown only to that user? Other logged in users? Non-authenticated users?
Regarding the user's business licence, I would store that in a folder that's not publicly accessible and access it via an API endpoint. This way you can implement the necessary security rules via your Laravel controller.
Generally speaking, I'd avoid storing files in a DB. It bloats the database, which affects backups and restores, among other things. Keeping files on the file system also makes it easier to move to cloud storage (such as Amazon S3) at some point if you need to scale your app.
I can successfully generate a temporary signed URL on Google Cloud Storage, with an expiry time etc.
However, the signed URL still contains the bucket name and file name in plain sight.
I understand that once the download occurs the filename will be visible, since we have downloaded that file. Still, it would be nice to obscure the bucket and filename in the URL.
Is this possible? The documentation does not give any clues, and a Google search has not really turned up anything that helps.
I don't think there's a way. Bucket naming best practices basically state that bucket and object names are "public", that you should avoid using sensitive information as part of those names, and advise you to use random names if you are concerned about name guessing/enumeration.
A possible workaround would be to proxy the "get" for the Cloud Storage objects through Cloud Functions or App Engine, so that the app retrieves the objects from Cloud Storage and then sends them to the client.
This is more costly and would require writing more code.
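As a rough sketch of that approach, an HTTP Cloud Function in Java could look something like this (the opaque-id lookup and any permission checks are placeholders you would implement yourself):

    import com.google.cloud.functions.HttpFunction;
    import com.google.cloud.functions.HttpRequest;
    import com.google.cloud.functions.HttpResponse;
    import com.google.cloud.storage.Blob;
    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    // Hypothetical Cloud Function: the client asks for an opaque id, the function
    // resolves it to the real bucket/object and streams the bytes back, so neither
    // name ever appears in the client-facing URL.
    public class DownloadProxy implements HttpFunction {
        private static final Storage storage = StorageOptions.getDefaultInstance().getService();

        @Override
        public void service(HttpRequest request, HttpResponse response) throws Exception {
            String fileId = request.getFirstQueryParameter("id").orElse(null);
            BlobId blobId = resolve(fileId); // placeholder lookup, see below
            Blob blob = blobId == null ? null : storage.get(blobId);
            if (blob == null) {
                response.setStatusCode(404);
                return;
            }
            response.setContentType(blob.getContentType());
            response.getOutputStream().write(blob.getContent());
        }

        // Assumption: you map opaque ids to bucket/object pairs (and permissions)
        // in your own datastore; this stub only illustrates the shape.
        private BlobId resolve(String fileId) {
            return fileId == null ? null : BlobId.of("my-bucket", "files/" + fileId);
        }
    }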
I can think of another possible workaround, which consists of protecting your signed URL with server-side code (such as PHP) so that users cannot see what the URL is. Nevertheless, taking into account that you want to avoid this data being displayed in the network activity during the download, you should test this workaround first to see whether it works as intended.
I want to supply my users with a Dropbox access token through my Parse server.
For those who don't know, a Dropbox access token is a string that grants direct access to a Dropbox account's files. It should be kept secret, because anyone who finds it can delete all the files.
My server should store many access tokens and supply each user with the correct one, but because of the anonymous log-in I'm afraid that if someone learns the Parse server key, they could get all the secret Dropbox access tokens.
I serve the access tokens from the server in the first place, rather than hard-coding them in the app, precisely to protect them.
But what difference does it make if the Parse key itself is hard-coded?
Is there a way to handle this?
Thanks.
Yes, you are correct. If somebody knows your API key, they can query your Parse Server without any problem unless you use ACLs.
An ACL (access control list) lets you decide, at the application level, which users/roles can read or write one or more Parse objects or Parse users. At runtime, Parse checks whether the logged-in user has access to read or write the object, and only if it does will the results be returned to the client.
So I suggest you protect your users/tokens with ACLs. If you want to protect only the access tokens, create a separate class that stores each user's access token, and give each object in that class an ACL for the relevant user only.
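As a minimal sketch with the Parse Android SDK (the "DropboxToken" class and its column names are made up for illustration):

    import com.parse.ParseACL;
    import com.parse.ParseObject;
    import com.parse.ParseUser;

    public class TokenStore {
        // Save a Dropbox token so that only its owner can ever read or write it.
        public static void saveTokenForCurrentUser(String dropboxToken) {
            ParseUser user = ParseUser.getCurrentUser();

            ParseObject entry = new ParseObject("DropboxToken");
            entry.put("owner", user);
            entry.put("accessToken", dropboxToken);

            ParseACL acl = new ParseACL(user);  // read/write granted to this user only
            acl.setPublicReadAccess(false);
            acl.setPublicWriteAccess(false);
            entry.setACL(acl);

            entry.saveInBackground();           // other users' queries will never return this object
        }
    }

With that ACL in place, a query made with another user's session (or with no session at all) simply won't return the object, even if the client keys are known.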
You can read more about ACLs here:
iOS SDK
Android SDK
JavaScript SDK
I'm currently working on a rather interesting... project. I have a client who wants to allow form uploads (from a page presented on their server) specifically to their own Google Drive account. The platform being used is essentially LAMP.
A single (pre-authenticated) Google Drive account; multiple otherwise anonymous upload sources (users).
They do not want users to be required to have their own Google accounts (this rules out simply using the Picker on the user's own Drive files).
They want some degree of backward browser compatibility, such as IE8 (this rules out using XHR to form the POST with HTML5's File API to read the file data). They don't want to use Flash etc. due to potential compatibility issues with certain mobile browsers.
What is working:
Authenticating (getting a refresh token, storing it, and using it to get access tokens as needed)
Uploading a file to the account without metadata
Result of file upload being sent to hidden iframe
Catching the iframe load event via jQuery to at least know that something has happened
Problems:
The REST API upload endpoint does not support CORS: there is no way to access the result iframe directly. (see: Authorization of Google Drive using JavaScript)
The return from a successful upload is only raw JSON, not JSONP.
There is seemingly no way to host anything with the proper headers on the googleapis.com domain for the browser to open, so easyXDM and similar multi-iframe, cross-origin-workaround JavaScript approaches are ruled out.
There is no way to embed a callback URL in the POST from the form submit; the API does not allow for it.
The Picker displays errors when trying to upload if you pass it an OAuth2 token that is not for a user who is also authenticated in their browser (presumably via cookie). Strangely enough, you can show files from the account matching the OAuth2 token, but unless the browser instance is already logged in as that same account, any file upload fails with an ambiguous "Server rejected" message. This happens with all files and file types, including files that work in an authenticated browser instance. I assume it's an authentication flow/scope issue of some sort. I haven't tried diving into the Picker source.
All of the JavaScript Google Drive API upload examples seem to rely on using HTML5 to get the file data, so anything of that nature seems to be ruled out.
While files do get uploaded, there's no way to tell which file came from which user other than guesstimating, since we can't read the file object ID from the result in our inaccessible iframe. At best we could make a very rough time-based guess, but that is a terrible idea given possible concurrency issues.
We can't set the file name or any other identifier for the file (not even a unique folder), because the REST API expects that metadata to be sent as JSON in the POST request body, not via form fields. So we end up with file objects in the Drive with no names, etc.
We can't create the file with its metadata populated server-side (or via jQuery/XHR, or the Google JavaScript API client) and then update it with a form-based upload, because the update API endpoint works exclusively with PUT (tested).
We can't upload the files to our local server and then send them to Google (proxy them), because the php.ini is locked down to prevent larger file uploads (and, going back to the restrictions on HTML5 and Flash, we can't chunk files, etc.).
All of this has been both researched and to varying degrees tried.
At the moment this is going on hold (at least it was a useful way to learn the API and get a sense of its limitations) and I'm just going to implement something similar on Dropbox, but if anyone has any useful input it would be lovely!
E.g. is there any way to get this working with Drive? Have I overlooked something?
I also realize that this is probably a less-than-intended use case, so I'm not expecting miracles. I realize that the ideal flow would be to simply have users upload, if necessary, to their own Google Drives and then grant file access to our web app (via the Picker or the API plus our own UI), but this becomes a problem when not all of our own users necessarily have Google accounts. I know that Google would OBVIOUSLY prefer we get even more people to sign up with them to make this happen, but making people sign up for a Google account to use our app was ruled out (not out of any prejudice on our part; it was just too many added steps and potential user hurdles). Even simply having them sign in to Google if they did have accounts was deemed unwanted for the basic lowest-common-denominator functionality, although it's likely to be added as an additional option on top of whatever becomes the base solution.
The biggest problem with the approach you described is that you're introducing a big security issue: allowing an anonymous user to upload directly to Drive from the client requires leaking a shared access token to anyone who comes by. Even with the limited drive.file scope, a malicious or even slightly curious user would be able to list and access (read/update/delete!) any file that was uploaded by that app.
Of course a public drop-box feature is still useful, but you really need to proxy those requests to avoid revealing the access token. If your PHP environment is too restrictive, why not run the proxy elsewhere? You can host a simple proxy to handle the uploading just about anywhere (App Engine, Heroku, etc.) and support whatever features you need to ensure the metadata is set correctly for your app.
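As a rough illustration of the server side of such a proxy, here is a sketch using the Drive v3 Java client (the credential setup is left abstract: requestInitializer is assumed to wrap credentials built from your stored refresh token, and the file is assumed to have already been received by the proxy from the form post):

    import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
    import com.google.api.client.http.FileContent;
    import com.google.api.client.http.HttpRequestInitializer;
    import com.google.api.client.json.gson.GsonFactory;
    import com.google.api.services.drive.Drive;
    import com.google.api.services.drive.model.File;

    public class DriveUploadProxy {

        // The proxy receives the user's file via an ordinary form post, then pushes it
        // to the single pre-authorised Drive account with whatever metadata you need.
        // The access token never leaves the server.
        public static String uploadForAnonymousUser(HttpRequestInitializer requestInitializer,
                                                    java.io.File receivedTempFile,
                                                    String originalName,
                                                    String mimeType) throws Exception {
            Drive drive = new Drive.Builder(
                            GoogleNetHttpTransport.newTrustedTransport(),
                            GsonFactory.getDefaultInstance(),
                            requestInitializer)
                    .setApplicationName("form-upload-proxy") // hypothetical application name
                    .build();

            File metadata = new File();
            metadata.setName(originalName); // metadata can now be set, unlike a direct form post

            FileContent content = new FileContent(mimeType, receivedTempFile);
            File created = drive.files().create(metadata, content).setFields("id").execute();
            return created.getId(); // store this so you know which upload came from which user
        }
    }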