I want to cache some cropped images and serve them without recomputing them in an Azure Web Site. When I used an Azure VM I just stored them on the D: drive (the temporary drive), but I don't know where to store them now.
I could use Path.GetTempPath, but I am not sure whether this is the best approach.
Can you suggest where I should store my temporary files when serving from an Azure Web Site?
Azure Websites also comes with a temp folder. The path is defined in the %TEMP% environment variable.
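As a rough sketch of that approach (CropImage here is a placeholder for whatever cropping code you already have), you could key the cached file off the crop parameters. Keep in mind the temp folder is per-instance and wiped on restarts, so treat it strictly as a cache:

using System.IO;

public byte[] GetCroppedImage(string sourcePath, int width, int height)
{
    // Cache directory under the site's temp folder (resolved via %TEMP%).
    string cacheDir = Path.Combine(Path.GetTempPath(), "image-cache");
    Directory.CreateDirectory(cacheDir); // no-op if it already exists

    // Name the cached file after the source image and the crop size.
    string fileName = string.Format("{0}_{1}x{2}.jpg",
        Path.GetFileNameWithoutExtension(sourcePath), width, height);
    string cachePath = Path.Combine(cacheDir, fileName);

    if (!File.Exists(cachePath))
    {
        byte[] cropped = CropImage(sourcePath, width, height); // your existing crop logic
        File.WriteAllBytes(cachePath, cropped);
    }
    return File.ReadAllBytes(cachePath);
}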
You can store your images in the App_Data folder in the root of your application, or you can use the Azure CDN for caching.
You could store the processed content on Azure Blob Storage and serve the content from there.
If what you really want is a cache, you can also look into using Azure Redis Cache.
You can use the Path.GetTempPath() and Path.GetTempFileName() functions to get a temp file name, but you are limited in terms of space, so if you're doing a 10K save for every request and expect 100,000 requests at a time per server, blob storage may be the better option.
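If you do outgrow the temp folder, here is a rough sketch of the blob storage route using the (classic) WindowsAzure.Storage package; the connection string, container and blob names are placeholders:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Parse the storage connection string (keep it in config, not in code).
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
CloudBlobContainer container = account.CreateCloudBlobClient()
                                      .GetContainerReference("image-cache");
container.CreateIfNotExists();

// Upload the locally generated temp file; blobs survive restarts and are
// shared across all instances, unlike the per-instance temp folder.
string tempFilePath = Path.Combine(Path.GetTempPath(), "photo_300x300.jpg"); // written earlier
CloudBlockBlob blob = container.GetBlockBlobReference("photo_300x300.jpg");
using (FileStream stream = File.OpenRead(tempFilePath))
{
    blob.UploadFromStream(stream);
}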
The following sample demonstrates how to store temp files in Azure, using both Path and Blob.
Doc is here: https://code.msdn.microsoft.com/How-to-store-temp-files-in-d33bbb10
Code is here: https://github.com/Azure-Samples/storage-blob-dotnet-store-temp-files/archive/master.zip
I already have a Play Framework app running, but I am in the process of migrating it to Heroku. On Heroku I cannot use the local filesystem the way I did in my app, so I am forced to use Amazon S3, but I do not know how to rewrite the thumbnail creation. For that I am using:
https://github.com/coobird/thumbnailator
Thumbnails.of(picture.getFile()).size(300,300).toFile(new File("public/images/data", thumb));
The problem is that I cannot do this on Heroku, because the file won't be persisted.
How do I create the thumbnails then, if I don't want to use yet another service that generates them for me and somehow saves them to S3...?
Honestly, if I had known how many different services I would need for a simple page in Java, I would have stayed with PHP forever...
On Heroku (as on many PaaS platforms) there is no persistent filesystem. However, you do have access to a temp directory.
So you could save the thumbnail into a temp file:
// createTempFile returns a File; Thumbnailator can write straight to it
File temp = File.createTempFile("thumb-", ".png");
Thumbnails.of(picture.getFile()).size(300, 300).toFile(temp);
Then take that file and upload it to S3.
If you strictly insist on not using S3 for storing binary files, you could Base64-encode the file content and save it in the DB (see some pros/cons of that approach here).
I've been struggling with setting up Umbraco on a development machine and a test server...
Both environments connect to the same database, and I use uSync to keep all my changes in git; media files, however, are a real p.i.t.a.
I started off by adding media on my dev machine and copying the media folder over when publishing to test. Not very elegant, so I tried using the rootPath and rootUrl attributes in the filesystemProviders config. The path points to a network file share and the URL to a dedicated virtual directory hosted on a media.test.mysite.com subdomain.
Surprise ... when opening the site, the old media has vanished, because Umbraco saves the absolute path in the cmsProperty tables ({'src': 'http://media.mysite.com/1041/...'}), whereas it previously saved the relative path when the virtual root was configured.
I'd like to alter how the media URLs are composed in both the front- and backend: define a media_root appSetting holding the protocol, hostname and port (http://media.test.mysite.com) and prepend it to the src value that comes from the DB...
Any suggestions?
I already tried a custom UrlProvider, but it seems this only works for non-media content ... :-|
Thanks!
Y.
I'd recommend using the Umbraco File System Provider for Azure, which will upload your media to Azure Blob Storage. You can then use the disk cache that comes with ImageProcessor.Web (included in Umbraco core) to cache the files locally. We run our dev environments pointing to the same blob storage as the other environments, so there is no need to copy the files, and the references stay relative (/media/1001/file.jpg) when using disk cache, thanks to the HTTP module in ImageProcessor.Web that caches them to disk. You could alternatively use the ImageProcessor Azure blob cache plugin and have the images load from Azure. You might also want to check out the documentation at Our.Umbraco.org, even if you aren't using Umbraco Cloud.
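For reference, a hedged sketch of what the provider registration in config/FileSystemProviders.config looks like for that package; the exact parameter keys vary by package version, so check the package README (account name, key and container are placeholders):

<FileSystemProviders>
  <Provider alias="media" type="Our.Umbraco.FileSystemProviders.Azure.AzureBlobFileSystem, Our.Umbraco.FileSystemProviders.Azure">
    <Parameters>
      <add key="containerName" value="media" />
      <add key="rootUrl" value="https://[myaccount].blob.core.windows.net/" />
      <add key="connectionString" value="DefaultEndpointsProtocol=https;AccountName=[myaccount];AccountKey=[mykey]" />
    </Parameters>
  </Provider>
</FileSystemProviders>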
I uploaded some files (mostly media files) to Azure File Storage, and I can see them in the Azure explorer as well. But when I view a file as an anonymous user, I am not able to access it. I tried the permission settings as well, but to no avail.
Any help would be welcome :)
Azure Files supports Shared Access Signatures (SAS). A SAS is a token that you compute with the storage account key and that grants access to a particular URL. Here is an example (the storage account name is obfuscated here):
https://mystorageaccount.file.core.windows.net/sampleshare/2.png?sv=2015-04-05&sr=f&si=sampleread&sig=Zq%2BfflhhbAU4CkCuz9q%2BnUFEM%2Fsg2PbXe3L4MeCC9Bo%3D&sip=0.0.0.0-255.255.255.255
There is sample code on how to create a SAS with Azure Files at https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/, § "Generate a shared access signature for a file or file share".
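A hedged sketch of what that sample boils down to, using the classic WindowsAzure.Storage package (share name, file name and connection string are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
CloudFileShare share = account.CreateCloudFileClient().GetShareReference("sampleshare");
CloudFile file = share.GetRootDirectoryReference().GetFileReference("2.png");

// Grant read-only access to this one file for 24 hours.
string sas = file.GetSharedAccessSignature(new SharedAccessFilePolicy
{
    Permissions = SharedAccessFilePermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(24)
});

// Append the SAS to the file URL and hand that URL to the anonymous user.
string url = file.Uri + sas;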
You can also do it interactively with a number of tools. For instance, CloudXPlorer has this feature.
I want to try AppHarbor, but I have an application which stores uploaded files in a certain place on the filesystem. Is this compatible with AppHarbor? Can I store files in the file system and access them later?
(What kind of path can I expect, something like c:\blabla, or what?)
Thank you.
You can store files on the local filesystem, but the application directory is wiped on each new deployment, so relying on it for file storage is not recommended.
Instead we recommend that you use a cloud storage service such as Amazon S3, Google Cloud Storage or similar. There are .NET libraries for both services.
We recently wrote a blog post about uploading files directly to S3 and GCS from the browser that you might want to read.
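If you upload server-side instead, a minimal sketch with the AWSSDK.S3 package (bucket name and file path are placeholders; credentials are assumed to come from config or the environment):

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

var s3 = new AmazonS3Client(RegionEndpoint.USEast1);
var transfer = new TransferUtility(s3);

// Uploads the file; the object key defaults to the file name.
transfer.Upload(@"C:\path\to\upload.dat", "my-bucket");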
If you are using a background worker, you need to 'Enable File System Write Access' in the settings of your application.
Then you are permitted to write to Path.GetTempPath().
Sourced from this support question: http://support.appharbor.com/discussions/problems/5868-create-directory-in-background-worker
I am a bit stuck with this Windows Azure Blob storage.
I have a controller that receives a (local) file path.
So on the web page I do something like this:
http:...?filepath=C:/temp/myfile.txt
On the web service I want to take this file and put it into the blob service. When I run it locally there is no problem, but when I publish it there is no way to get the file. I always get:
Error encountered: Could not find a part of the path 'C:/temp/myfile.txt'.
Can someone help me? Is there a solution?
First, I would say that to get proper help you will need to provide a better description of your problem. What do you mean by "on the web service"? A WCF web role would match part of your description; however, most web services use http://whatever.cloudapp.net/whatever.svc, as well as http://whatever.cloudapp.net/whatever.aspx?whatever if added. Have you done something like that in your application?
You have also mentioned a controller in your code, which makes me think it is an MVC-based web role application.
I am writing the above to help you formulate your question better next time.
Finally, based on what you have provided: you are reading a file from the local file system (C:\temp\myfile.txt) and uploading it to an Azure blob. This works in the compute emulator but will surely fail in Windows Azure, because:
In your web role code you will not have permission to write to the C:\ drive, which is why the file is not there and you get that error. Your best bet is to use Azure Local Storage: write the content there, read the file back from Local Storage, and then upload it to the Azure blob. Azure Local Storage is designed for writing content from a web role (you will have write permission).
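A minimal sketch of the Local Storage approach ("TempUploads" is a placeholder; it has to be declared as a LocalStorage resource in your ServiceDefinition.csdef):

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

// Resolve the role's local scratch area, declared in ServiceDefinition.csdef as
// <LocalStorage name="TempUploads" sizeInMB="100" cleanOnRoleRecycle="true" />
LocalResource local = RoleEnvironment.GetLocalResource("TempUploads");
string path = Path.Combine(local.RootPath, "myfile.txt");

// Stage the content locally, then read it back and upload it to the blob.
File.WriteAllText(path, "content to push to blob storage");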
Finally, I am also concerned about your application design: Azure VMs are not persistent, so a solution that relies on writing somewhere inside the VM is not a good idea, and you may want to write directly to Azure Storage without staging the content locally, if that is at all possible.
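A hedged sketch of that upload-based design in an MVC controller: the browser posts the file's bytes instead of its local path, and the action streams them straight into a blob (container name and connection string are placeholders):

using System.Web;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class FilesController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("uploads");
        container.CreateIfNotExists();

        // Stream the posted bytes directly to the blob; no local file involved.
        CloudBlockBlob blob = container.GetBlockBlobReference(file.FileName);
        blob.UploadFromStream(file.InputStream);

        return new HttpStatusCodeResult(201);
    }
}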
Did you verify the file exists on the Azure server?