I am wondering if what I'm doing is a good practice. Please advise. Thanks.
My web application server caches generated chart images for users to enhance performance.
The images are stored in session-based folders, where the folder name is generated per session.
Let's say user1 plotted a chart and it is cached on the server here:
webapp\sessionFolder\aklfq13d10jd10\image.jpg
I disabled IIS7 directory browsing.
But I find that other users of the system can also access the image if they enter the full URL, even though they're not supposed to see it, since it is cached for user1.
How can I prevent such unauthorized access? Or is there a better practice for implementing this kind of web caching?
Thank you!
Kyeo
A better approach would be to cache images in a directory that is not accessible to the client (for example, a subdirectory of App_Data), then have a handler that streams the contents of files from this directory to authorized users.
If the files are specific to a user, you could for example store the images in folder names derived from the username:
App_Data\TempImages\User1
App_Data\TempImages\User2
Then the handler that streams the content will only stream files for the currently logged-on user, something like (modulo a bit of error handling):
string path = Path.Combine(
    AppDomain.CurrentDomain.BaseDirectory,
    @"App_Data\TempImages",
    HttpContext.Current.User.Identity.Name,
    Path.GetFileName(Request.QueryString["imageFileName"])); // strip any path segments from the requested name

if (File.Exists(path))
{
    Response.ContentType = "image/jpeg";
    Response.TransmitFile(path); // stream the image to the authorized user
}
You could use the sessionId as an identifier instead of the username, but in this case the cached data will become inaccessible whenever the session times out.
Related
Stack is:
Angular
Laravel
S3
nginx
I'm using S3 to store confidential resources of my users. Bucket access is set to private which means I can access files either by creating temporary (signed, dynamic) links or by using Storage::disk('s3')->get('path/to/resource') method and returning an actual file as a response.
I'm looking for a way to cache resources in the user's browser. I have tried setting cache headers on the resource response directly on AWS, but since I'm creating temporary URLs, they are dynamic and caching does not work in that case.
Any suggestion is highly appreciated.
EDIT: One thing that makes the whole problem even more complex is that the security of the resources must stay intact. It means that I need a way to cache resources, but at the same time I must prevent users from copy-pasting links and using them outside of the app (sharing with others via direct links).
In terms of security, temporary links are still not an ideal solution, since they can be shared (and accessed multiple times) within the period of time they are valid for (in my case, 30 seconds).
Caching will work as-is (based on Cache-Control, et al.) as long as the URL stays the same. So, if your application uses the same signed URL for a while, you'll be fine.
The problem comes when you want to update an expiration date or something. That of course means different query-string parameters, and is effectively a different URL. You need a caching key that ignores those parameters, but the browser has no concept of this by default.
If it is acceptable for your security, you can create a Service Worker which uses just the base URL (without the query string) as the cache key. Then, future requests for the same object on the bucket will be able to use the cached response, regardless of other URL parameters.
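A minimal sketch of such a Service Worker, assuming the images come from a single bucket host (the hostname and cache name below are placeholders, not taken from the question):

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.hostname !== 'my-bucket.s3.amazonaws.com') return; // hypothetical bucket host

  const cacheKey = url.origin + url.pathname; // drop the signed query string
  event.respondWith(
    caches.open('s3-images').then(async (cache) => {
      const cached = await cache.match(cacheKey);
      if (cached) return cached;
      const response = await fetch(event.request); // the full signed URL is still used on the network
      if (response.ok) await cache.put(cacheKey, response.clone());
      return response;
    })
  );
});

The Angular app would register this file (e.g. navigator.serviceWorker.register('/sw.js')). Note that cached responses are readable by anyone using that browser profile, so only do this if that is acceptable for your security model.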
I must prevent users from copy-pasting links and using them outside of the app (sharing with others via direct links).
This part of your requirement is impossible, and unrelated to caching. Once that URL is signed, it can be used by others.
You just have to add one parameter in your code:
'ResponseCacheControl' => 'no-store'
For example:
Storage::disk('s3')->getAwsTemporaryUrl(
    Storage::disk('s3')->getDriver()->getAdapter(),
    trim($mNameS3),
    \Carbon\Carbon::now()->addMinutes(config('app.aws_bucket_temp_url_time')),
    ['ResponseCacheControl' => 'no-store']
);
I have developed a small web application in Go that runs its own web server.
Each user can log in, view the list of their documents (previously uploaded), and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf"
On the server side I handle the URL ("mydocuments/download/*") via an http Handler
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL
Then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem
fileServer := http.StripPrefix("/mydocs/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
I am now trying to adapt the code so that it also works on Google Cloud, with files stored in a bucket (or, better, in "sub-directories" of a bucket, even though these do not properly exist).
How can I modify the code to map the document URL to the file available via the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue; I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS (a sketch of this option follows below).
Use http.FS with an fs.FS backed by GCS.
It's possible that a fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
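As a rough sketch of the proxy option, assuming the cloud.google.com/go/storage client; the bucket name and object prefix below are placeholders, and the existing access check stays where it was:

import (
	"io"
	"net/http"

	"cloud.google.com/go/storage"
	"goji.io/pat"
)

func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
	// ... check that the logged-in user may read this document, as before ...

	ctx := r.Context()
	client, err := storage.NewClient(ctx)
	if err != nil {
		http.Error(w, "storage unavailable", http.StatusInternalServerError)
		return
	}
	defer client.Close()

	docName := pat.Param(r, "docname") // e.g. "123-456-789.pdf"
	// "my-documents-bucket" and "user-specific-folder/" are placeholders for your bucket and prefix.
	obj := client.Bucket("my-documents-bucket").Object("user-specific-folder/" + docName)

	rc, err := obj.NewReader(ctx)
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer rc.Close()

	w.Header().Set("Content-Type", rc.Attrs.ContentType)
	io.Copy(w, rc)
}

In practice you would create the storage.Client once at startup and reuse it; the existing mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler) wiring stays unchanged.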
I have an Angular webapp that uses a Spring Boot REST service as its backing web service.
I am adding a "Profiles" feature for users, and as part of this I want to stand up an endpoint that allows users to upload profile images for themselves and immediately upload those files to S3 (where I will host all the images from).
Looking at several Spring Boot file-upload tutorials:
http://www.mkyong.com/spring-boot/spring-boot-file-upload-example/
I update avatar image and display it but the avatar does not change in Spring Boot , why?
Many others
It seems that the standard way of handling such file upload is exposing a controller endpoint that accepts MultipartFiles like so:
@RestController
@RequestMapping("/v1/profiles")
public class ProfileController {

    @PostMapping("/photo")
    public ResponseEntity uploadProfilePhoto(@RequestParam("mpf") MultipartFile mpf) {
        // ...
    }
}
Looking at all this code, I can't tell if the MultipartFile instance is in-memory or if Spring sets its location somewhere (perhaps under /tmp?) on the disk.
Looking at the AWS S3 Java SDK tutorial, it seems the standard way to upload a disk-based File is like so:
File file = new File(uploadFileName);
s3client.putObject(new PutObjectRequest(bucketName, keyName, file));
So it looks like I must have a File on disk in order to upload to S3.
I'm wondering if there is a way to keep everything in memory, or whether this is a bad idea and I should stick to disks/File instances!
Is there a way to keep the entire profile image (MultipartFile) in-memory inside the controller method?
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
Or is this all a terrible idea (if so, why?!)?
Is there a way to keep the entire profile image (MultipartFile) in-memory inside the controller method?
No, there is no way to keep an image File in memory, because a File object in Java represents a path in the file system.
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
No; judging from S3's API documentation, there is no way for S3 to deserialize a MultipartFile into an image file for you during or after the upload.
Or is this all a terrible idea (if so, why?!)?
It depends on your specific case but it is generally not preferred.
If there are not many users uploading images at the same time, your memory is probably enough to handle it.
Otherwise, you can easily run into out-of-memory problems.
If you insist on doing so, the S3 API can upload from an InputStream (if I remember correctly), and you can get an InputStream from your MultipartFile (see the sketch below).
This SO thread talks about uploading to S3 with InputStream
You can also take a look at File.createTempFile() to create a temp file.
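A minimal sketch of the InputStream route, using the AWS SDK for Java v1 (the bucket name, key, and the injected s3client field are assumptions for illustration, not from the question):

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
// plus the existing Spring MVC imports (ResponseEntity, @PostMapping, @RequestParam)

@PostMapping("/photo")
public ResponseEntity<Void> uploadProfilePhoto(@RequestParam("mpf") MultipartFile mpf) throws IOException {
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(mpf.getSize());       // required when uploading from a stream
    metadata.setContentType(mpf.getContentType());

    // "my-profile-images-bucket" and the key are placeholders; s3client is an injected AmazonS3 instance.
    s3client.putObject(new PutObjectRequest(
            "my-profile-images-bucket",
            "profiles/" + mpf.getOriginalFilename(),
            mpf.getInputStream(),
            metadata));

    return ResponseEntity.ok().build();
}

This keeps the bytes in memory only for the duration of the request; for large files or heavy traffic, writing to a temp file first (as mentioned above) is the safer option.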
I have been looking at the same thing. Basically you want a user to be able to upload a photo album and have those photos served from S3, and probably have them secured so only that user can upload/delete/etc.
I believe the simpler answer in Spring Boot is to get a pre-signed URL from S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html
This basically gives you a token defining the bucket, the object key ("/bobs_profile/smiling_bob.jpg"), and a time limit for that image to be uploaded (see the sketch below).
Give that to your angular app (or ionic app) to upload the image to that location.
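For reference, a rough sketch of generating such a pre-signed upload URL with the AWS SDK for Java v1 (the bucket name, the 15-minute expiry, and the s3client parameter are assumptions for illustration):

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public URL createUploadUrl(AmazonS3 s3client) {
    // Pre-signed PUT URL, valid for 15 minutes, for one specific object key.
    Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
    GeneratePresignedUrlRequest request =
            new GeneratePresignedUrlRequest("my-photos-bucket", "bobs_profile/smiling_bob.jpg")
                    .withMethod(HttpMethod.PUT)
                    .withExpiration(expiration);
    return s3client.generatePresignedUrl(request);
}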
That should do it, but someone let me know if I'm wrong.
The only issue that I see is if Bob wants to upload "bobs_nude_photo.jpg" and only wants Spring Security logged-in users to be able to see it... well, I'm sure there is an S3 solution for that?
I am trying to host a website in Azure Blob Storage, as discussed here.
I have had success with www.mysite.com.au (where mysite is not the real name), which redirects to http://docs.mysite.com.au/site/index.html (not a real URL), where docs is a CNAME with the alias being the blob storage name.
The blob access policy is set to Container
The direct link in Azure is https://mysite.blob.core.windows.net/site/index.html (not the real name)
I am puzzled as to why I cannot go to http://docs.mysite.com.au/site/index.html directly
When I do this I get an error
The requested URI does not represent any resource on the server
I think the answer might have to do with working with blobs rather than files.
It's similar to why "subfolders" can't be created in $root.
[Update]
I also ran into this problem when I deleted index.html and then re-uploaded it.
I can see the file in storage explorer.
I think I will need to revert to an app service.
For hosting a static website on Azure Blob Storage, you could leverage the root container ($root) and store your files under the root path as follows:
https://brucchstorage.blob.core.windows.net/index.html
Custom domain: http://brucestorage.conforso.org/index.html
For script and CSS files, you could create another container (e.g. content), then put script files under content/script/ and CSS files under content/css/, or you could create a separate container for each of them:
https://brucchstorage.blob.core.windows.net/content/css/bootstrap.min.css
https://brucchstorage.blob.core.windows.net/content/script/bootstrap.min.js
The requested URI does not represent any resource on the server
AFAIK, a blob in the root container cannot include a forward slash (/) in its name. If you upload a blob into the root container with a / in its name, you will get this error.
I think I must have had the custom name set incorrectly in Azure.
It should have been docs.mysite.com.au ( not the real name)
I'm using GroceryCRUD to act as a front end for a database containing news releases. Secretaries can now go in and add/edit/delete news releases in the database easily. Only qualified users are able to access the application root via an .htaccess password. The problem with this is that GroceryCRUD uploads assets such as photos to the directory /www/approot/assets/uploads/, which is password protected since /approot/ is protected.
My ideal solution would be to set an upload directory outside of the application root which is where I'm running into trouble. By default this is how GroceryCRUD handles uploads:
$this->grocery_crud->set_field_upload('photo1','assets/uploads/');
I've tried changing it to something like this:
$this->grocery_crud->set_field_upload('photo1','/public/assets/uploads/');
I was hoping the leading / would make the path start from the document root instead of the application root, but it throws this error:
PHP Fatal error: Uncaught exception 'Exception' with message 'It
seems that the folder "/Users/myusername/www/approot//public/assets/uploads/"
for the field name "photo1" doesn't exists.
This seems to suggest that CI or GroceryCRUD just takes the second argument of set_field_upload and concatenates it onto the end of the site path that is defined. Is there any way around this that doesn't involve creating a user login system?
Try using a relative path:
$this->grocery_crud->set_field_upload('photo1','../assets/uploads/');
.. -> Go up one directory
I ended up implementing a login system outlined in this tutorial:
http://net.tutsplus.com/tutorials/php/easy-authentication-with-codeigniter/
It was quite simple to set up and suits my needs. I found ways to give access to the directory using httpd.conf directives but I feel like this was a more viable solution since I don't have direct access to server configuration files.
Maybe in the future GroceryCRUD will allow placement of uploads outside the application folder.