Configure cloud storage and default headers - laravel-5

How can I configure Cloud Storage so that all files, even public ones, are not cached, or assign default headers to every file that is stored in cloud storage?
I am uploading files from Laravel 5.5.45 using the filesystem with the Superbalist/laravel-google-cloud-storage library. The only thing I can do is assign public or private visibility to a file, and with private visibility the file is not shown to my client-side system. I want Cloud Storage to treat files uploaded with public visibility so that they are not cached, because when the client updates a file it takes an hour before the new version appears.
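One way this might be approached, sketched below, is to set each object's Cache-Control metadata at upload time through the underlying google/cloud-storage PHP client rather than through the Flysystem visibility setting alone. The project ID, key file path, bucket and object names here are placeholders, and the exact way to reach the client from the Superbalist adapter may differ.
use Google\Cloud\Storage\StorageClient;
// Placeholder project, key file and bucket names.
$storage = new StorageClient([
    'projectId'   => 'my-project',
    'keyFilePath' => '/path/to/service-account.json',
]);
$bucket = $storage->bucket('my-bucket');
// Upload with public access and a Cache-Control value that disables caching.
$bucket->upload(fopen('/tmp/report.pdf', 'r'), [
    'name'          => 'uploads/report.pdf',
    'predefinedAcl' => 'publicRead',
    'metadata'      => [
        'cacheControl' => 'no-store, max-age=0',
    ],
]);
// Or update an object that is already in the bucket.
$bucket->object('uploads/report.pdf')->update([
    'cacheControl' => 'no-store, max-age=0',
]);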

Related

Can I serve files stored in Google Cloud Storage via a http.FileServer in golang?

I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their documents (previously uploaded) and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an HTTP handler:
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL,
then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem:
fileServer := http.StripPrefix("/mydocuments/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am converting the code to also work on Google Cloud, I am changing it so that files are stored in a bucket (or, better, in "sub-directories" of a bucket, even though these do not properly exist).
How can I modify the code so as to map the real document URL to the one available via the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue; I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS.
Use http.FS with an fs.FS backed by GCS.
It's possible that a fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.

The requested URI does not represent any resource on the server

I am trying to host a website in Azure Blob Storage
as discussed here
I have had success with www.mysite.com.au (where mysite is not the real name), which redirects to
http://docs.mysite.com.au/site/index.html (not a real URL),
where docs is a CNAME whose alias is the blob storage name.
The blob access policy is set to Container
The direct link in Azure is https://mysite.blob.core.windows.net/site/index.html (not the real name)
I am puzzled as to why I cannot go to http://docs.mysite.com.au/site/index.html directly
When I do this I get an error
The requested URI does not represent any resource on the server
I think the answer might have to do with working with blobs rather than files,
similar to why "subfolders" can't be created in $root.
[Update]
I also ran into this problem when I deleted index.html and then re-uploaded it.
I can see the file in Storage Explorer.
I think I will need to revert to an App Service.
For hosting a static website on Azure Blob Storage, you could leverage the root container ($root) and store your files under the root path, as follows:
https://brucchstorage.blob.core.windows.net/index.html
Custom domain: http://brucestorage.conforso.org/index.html
For script and CSS files, you could create another container (e.g. content), then put script files under content/script/ and CSS files under content/css/, or create a separate container for each.
https://brucchstorage.blob.core.windows.net/content/css/bootstrap.min.css
https://brucchstorage.blob.core.windows.net/content/script/bootstrap.min.js
The requested URI does not represent any resource on the server
AFAIK, a blob in the root container cannot include a forward slash (/) in its name. If you upload a blob into the root container with a / in its name, you will get this error.
I think I must have had the custom domain name set incorrectly in Azure.
It should have been docs.mysite.com.au (not the real name).

Upload public object to Google Cloud Storage with public link

I've searched everywhere for this but I cannot find a solution: how can I upload a public object to my Google Cloud Storage bucket? I want the image, once uploaded, to be viewable by anyone in the world.
It seems I can only get this done if I manually click the public link in Google Storage, but I want to be able to make these objects public automatically through Google's API.
The web interface doesn't provide a way to make the objects being uploaded public automatically, but you can do one of two things:
If you want to just make objects publicly readable during one particular session you could use gsutil to do it, e.g.,
gsutil -m cp -a public-read dir/* gs://your-bucket
If you want to make objects publicly readable across all future sessions you could set a default object ACL on the bucket, using a command like:
gsutil defacl set public-read gs://your-bucket
If you do that, uploads via the web interface (as well as by any other API requests, e.g., gsutil cp commands) will be made publicly readable automatically.
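If you would rather do this from application code than from gsutil, a rough sketch with the google/cloud-storage PHP client might look like the following (the key file path, bucket and object names are placeholders):
use Google\Cloud\Storage\StorageClient;
$storage = new StorageClient(['keyFilePath' => '/path/to/service-account.json']);
$bucket  = $storage->bucket('your-bucket');
// Upload one object and make it publicly readable in the same call.
$bucket->upload(fopen('/local/image.png', 'r'), [
    'name'          => 'images/image.png',
    'predefinedAcl' => 'publicRead',
]);
// Or mirror "gsutil defacl set public-read" so future uploads inherit public access.
$bucket->defaultAcl()->add('allUsers', 'READER');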

How to map a filesystem path to an HTTP URL in Spring

I'm working on a Spring Web MVC project which allows users to upload files. I am saving these uploaded files outside the application context so that they persist across deployments. Saving the files works fine. I want to know the best way to convert the filesystem path to an HTTP URL so that it can be saved in the database and also used in HTML resources.
Thanks in advance.
As you want to access a file on the filesystem like a static resource using Spring MVC, the answer (taken from here) is to serve the static resources by adding an entry like the following in your servlet context:
Example:
<mvc:resources mapping="/images/**" location="file:/absolute/path/to/the/resource/folder/" />
In this case you store all the files under the same path on the same server using a single resource mapping, so you only need to store the filename in the database.
In your HTML tag you need to use a relative URL (being aware of your context, so you could access your file like http://yourhost:port/context/resource/yourfile).
In case you want to store files on a different server, you should add another resource location (but it must be available as a filesystem path to the other server); in that case it would make sense to store a value like "resourcename/filename" in the database.

Dynamically updating config data in CodeIgniter

I have created a custom config file to store information about the site, such as whether it is online or offline and the like.
For that I created a new file in the config folder and stored default values in the global $config[] array with my own index.
I want to update this config data dynamically from the admin's controls, e.g. the admin can choose to put the site into offline mode.
For that I have used the function
$this->config->set_item('config_array_index', 'value_to_set');
but I don't know why it is not working for me.
I am not able to see any update in my config file. Also I am autoloading my config file.
Am I missing something?
Setting a config item only applies to the current request - it does not overwrite your actual CodeIgniter config files.
If you want to permanently take a site offline, you'll need some sort of persistent storage, such as a value in a database that is checked/updated as needed.
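A rough sketch of that database approach, assuming CodeIgniter 3 and a hypothetical settings table with name and value columns (the table, model and setting names are made up for illustration):
// application/models/Setting_model.php
class Setting_model extends CI_Model
{
    // Read one setting, falling back to a default when the row is missing.
    public function get($name, $default = null)
    {
        $row = $this->db->get_where('settings', ['name' => $name])->row();
        return $row ? $row->value : $default;
    }
    // Insert or update a setting so the change survives across requests.
    public function set($name, $value)
    {
        if ($this->db->get_where('settings', ['name' => $name])->num_rows() > 0) {
            $this->db->where('name', $name)->update('settings', ['value' => $value]);
        } else {
            $this->db->insert('settings', ['name' => $name, 'value' => $value]);
        }
    }
}
// Then, after loading the model in a base controller or a hook:
// if ($this->setting_model->get('site_online', '1') === '0') {
//     show_error('The site is currently offline.', 503);
// }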
Alternatively, you can create an empty config file within your config directory and then append your data to it using something like fwrite() (you can check for another CI function to use),
and add it to your autoload file.
