The requested URI does not represent any resource on the server - azure-blob-storage

I am trying to host a website in Azure Blob Storage,
as discussed here.
I have had success with www.mysite.com.au (where mysite is not the real name), which redirects to
http://docs.mysite.com.au/site/index.html (not a real URL),
where docs is a CNAME whose alias is the blob storage name.
The blob access policy is set to Container
The direct link in Azure is https://mysite.blob.core.windows.net/site/index.html (not the real name)
I am puzzled as to why I cannot go to http://docs.mysite.com.au/site/index.html directly
When I do this I get an error
The requested URI does not represent any resource on the server
I think the answer might have to do with working with blobs rather than files,
similar to why "subfolders" can't be created in $root.
[Update]
I also ran into this problem when I deleted index.html and then re-uploaded it.
I can see the file in storage explorer.
I think I will need to revert to an app service.

For hosting a static website on Azure Blob Storage, you could leverage the root container ($root) and store your files under the root path as follows:
https://brucchstorage.blob.core.windows.net/index.html
Custom domain: http://brucestorage.conforso.org/index.html
For script and CSS files, you could create another container (e.g. content), then put script files under content/script/ and CSS files under content/css/, or you could create a separate container for each.
https://brucchstorage.blob.core.windows.net/content/css/bootstrap.min.css
https://brucchstorage.blob.core.windows.net/content/script/bootstrap.min.js
The requested URI does not represent any resource on the server
AFAIK, a blob in the root container cannot include a forward slash (/) in its name. If you upload a blob into the root container with a / in its name, you will receive this error.
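As an illustration, a blob can be pushed into $root with the Azure Blob Storage Go SDK (github.com/Azure/azure-sdk-for-go/sdk/storage/azblob); this is only a sketch, and the connection string and file name are placeholders, not values from the question:

package main

import (
	"context"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	client, err := azblob.NewClientFromConnectionString(
		os.Getenv("AZURE_STORAGE_CONNECTION_STRING"), nil)
	if err != nil {
		log.Fatal(err)
	}
	data, err := os.ReadFile("index.html")
	if err != nil {
		log.Fatal(err)
	}
	// "index.html" is a valid $root blob name; "site/index.html" would be
	// rejected because root-container blob names cannot contain "/".
	_, err = client.UploadBuffer(context.Background(), "$root", "index.html", data, nil)
	if err != nil {
		log.Fatal(err)
	}
}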

I think I must have had the custom domain name set incorrectly in Azure.
It should have been docs.mysite.com.au (not the real name).

Related

Can I serve files stored in Google Cloud Storage via an http.FileServer in golang?

I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their docs (previously uploaded) and click on an item to view an HTML page that shows some fields of the document plus a tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an http Handler
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL,
then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the server's filesystem:
fileServer := http.StripPrefix("/mydocuments/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am trying to convert the code so that it also works on Google Cloud, I am changing it so that files are stored in a bucket (or, better, in "sub-directories" of a bucket, even though those do not properly exist).
How can I modify the code to map the document URL to the corresponding object in the cloud storage bucket?
Can I still use the http.FileServer technique above? (If so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored?)
I hope I was clear enough in explaining my issue; I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS.
Use http.FS with an fs.FS backed by GCS.
It's possible that a fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
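For the proxying option, here is a minimal sketch assuming the cloud.google.com/go/storage client, a placeholder bucket name ("my-docs-bucket"), and a placeholder object layout; the rights check is elided:

package main

import (
	"context"
	"io"
	"net/http"
	"strings"

	"cloud.google.com/go/storage"
)

// gcsClient is created once at startup and reused across requests.
var gcsClient *storage.Client

func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
	// ... check that the user has the rights to view the document ...
	docname := strings.TrimPrefix(r.URL.Path, "/mydocuments/download/")

	// Objects have flat names; "user-specific-folder/" is just a key prefix.
	rc, err := gcsClient.Bucket("my-docs-bucket").
		Object("user-specific-folder/" + docname).
		NewReader(r.Context())
	if err != nil {
		http.NotFound(w, r) // object missing or unreadable
		return
	}
	defer rc.Close()

	w.Header().Set("Content-Type", rc.Attrs.ContentType)
	io.Copy(w, rc) // stream the object body to the client
}

func main() {
	var err error
	gcsClient, err = storage.NewClient(context.Background())
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/mydocuments/download/", DocDownloadHandler)
	http.ListenAndServe(":8080", nil)
}

For the signed-URL option, storage.SignedURL (or BucketHandle.SignedURL in newer client versions) can hand the browser a short-lived direct link after the same rights check, avoiding the proxy hop entirely.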

How do I save to RFC 5785 (The folder .well-known) in Azure Blob?

I need to save the Apple-Site-Association file to the .well-known top level directory in Azure Blob Storage.
How can I save a file to this url?
The base URI of a blob includes the name of the account, the name of the container, and the name of the blob:
https://{storageaccount}.blob.core.windows.net/{containername}/{blobname}
You cannot use ".well-known" as the container name: it violates the naming rules, because container names must start with a letter or number.
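As a quick illustration with the Azure Blob Storage Go SDK (github.com/Azure/azure-sdk-for-go/sdk/storage/azblob; the connection string here is a placeholder), the service rejects the attempt:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	client, err := azblob.NewClientFromConnectionString(
		os.Getenv("AZURE_STORAGE_CONNECTION_STRING"), nil)
	if err != nil {
		panic(err)
	}
	// Container names must start with a letter or number, so this fails.
	_, err = client.CreateContainer(context.Background(), ".well-known", nil)
	fmt.Println(err) // expect an InvalidResourceName-style error
}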

Read "public" file content in a Revel app

I am currently writing a Go web app using Revel.
My app needs to read the content of an XML file which is stored on the server. At the moment, I store this file in the "public" folder where some other resources (css, js...) lie.
I am using ioutil.ReadFile to read the content of this file. While this works when the server is run from the main app folder itself, I cannot figure out how to access the file when the server is run from another location (say, by running "revel run myapp" from $GOPATH).
Is there any way to deal with this situation in Revel?
Is there a generic way to know the path of the "public" folder?
Any hint would be appreciated.
Thanks! :)
The base path of the application is stored and accessible through revel.BasePath.
The "public" folder can thus be accessed through revel.BasePath + "/public/<...>".
This BasePath value is used, for example, in Static.Serve.
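A minimal sketch (the file name data.xml is a placeholder):

import (
	"io/ioutil"
	"path/filepath"

	"github.com/revel/revel"
)

func readPublicXML() ([]byte, error) {
	// revel.BasePath is the application root regardless of where
	// "revel run" was invoked from.
	path := filepath.Join(revel.BasePath, "public", "data.xml")
	return ioutil.ReadFile(path)
}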

With GroceryCrud, how can I put uploads above application root?

I'm using GroceryCRUD to act as a front end for a database containing news releases. Secretaries can go in and add/edit/delete news releases in the database easily now. Only qualified users are able to access the application root via an .htaccess password. The problem with this is that GroceryCRUD uploads assets such as photos to the directory /www/approot/assets/uploads/, which is password protected since /approot/ is protected.
My ideal solution would be to set an upload directory outside of the application root which is where I'm running into trouble. By default this is how GroceryCRUD handles uploads:
$this->grocery_crud->set_field_upload('photo1','assets/uploads/');
I've tried changing it to something like this:
$this->grocery_crud->set_field_upload('photo1','/public/assets/uploads/');
I was hoping the leading / would make the path start from the document root instead of the application root, but it throws this error:
PHP Fatal error: Uncaught exception 'Exception' with message 'It
seems that the folder "/Users/myusername/www/approot//public/assets/uploads/"
for the field name "photo1" doesn't exists.
This seems to suggest that CI or GroceryCRUD just takes the second argument of set_field_upload and concatenates it onto the end of the defined site URL. Is there any way around this that doesn't involve creating a user login system?
Try using a relative path:
$this->grocery_crud->set_field_upload('photo1','../assets/uploads/');
.. goes up one directory.
I ended up implementing a login system outlined in this tutorial:
http://net.tutsplus.com/tutorials/php/easy-authentication-with-codeigniter/
It was quite simple to set up and suits my needs. I found ways to give access to the directory using httpd.conf directives, but I felt this was a more viable solution since I don't have direct access to the server configuration files.
Maybe in the future GroceryCRUD will allow placement of uploads outside the application folder.

Web Server Session-Based Caching Security Issue?

I am wondering if what I'm doing is a good practice. Please advise. Thanks.
My web application server caches generated chart images for users to enhance performance.
The images are stored in session-based folders, where the folder name is generated.
Let's say user1 plotted a chart and it is cached on the server here:
webapp\sessionFolder\aklfq13d10jd10\image.jpg
I disabled IIS7 directory browsing.
But I find that other users of the system can access the image too if they enter the full URL, even though they are not supposed to see it, since it is cached for user1.
How can I prevent such unauthorized access? Or is there a better practice for implementing this kind of web caching?
Thank you!
Kyeo
A better approach would be to cache images in a directory that is not accessible to the client (for example, a subdirectory of App_Data), then have a handler that streams the contents of files from this directory to authorized users.
If the files are specific to a user, you could for example store the images in folder names derived from the username:
App_Data\TempImages\User1
App_Data\TempImages\User2
Then the handler that streams the content will only stream files for the current logged on user, something like (modulo a bit of error handling):
string path = Path.Combine(
    AppDomain.CurrentDomain.BaseDirectory,
    @"App_Data\TempImages",  // verbatim string so the backslash is literal
    HttpContext.Current.User.Identity.Name,
    Request.QueryString["imageFileName"]);
... stream image at path if it exists ...
You could use the sessionId as an identifier instead of the username, but in this case the cached data will become inaccessible whenever the session times out.
