Programmatically detecting if a folder is a cloud local folder - Windows

Is there a way of detecting whether a particular folder is being used as the local store for cloud storage in Windows? The default name of the local cloud store folder seems to be the name of the cloud provider (e.g. OneDrive, Google Drive, Dropbox), and local cloud folders are given distinctive icons. Folders and files within the local store also have additional icons indicating their sync status. However, users may rename local cloud store folders. Is there any folder attribute accessible from C# that will allow me to determine if a folder is a local cloud store?

There is a possibility that you can use storageFolder.Provider.Id.
Documentation link: https://learn.microsoft.com/en-us/uwp/api/windows.storage.storagefolder.provider?view=winrt-19041
Example: var provider = storageFolder.Provider;
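As a rough sketch (assuming a project that can call the WinRT storage APIs, e.g. via the Microsoft.Windows.SDK.Contracts package; the exact Id strings are provider-specific, so treat the "computer" comparison below as an assumption to verify):

using System;
using System.Threading.Tasks;
using Windows.Storage;

static async Task<bool> IsCloudSyncedFolderAsync(string path)
{
    // Resolve the folder and ask Windows which storage provider backs it.
    StorageFolder folder = await StorageFolder.GetFolderFromPathAsync(path);
    StorageProvider provider = folder.Provider;

    // Purely local folders typically report an Id of "computer";
    // cloud-synced folders report their provider, e.g. "OneDrive".
    return !string.Equals(provider.Id, "computer", StringComparison.OrdinalIgnoreCase);
}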

Related

How do you change the default cache directory when publishing a service in ArcMap?

I am trying to publish a service using ArcMap and I need to cache the layers on this service when I publish it.
The default cache directory is in a drop down list for which there are no other available options, and this directory is located on the C drive.
However I need to change this so that the caching takes place on the Z drive instead as I have no available space on the C drive.
Can this be done, and if so, how? Changing the display cache directory from the ArcMap options did not change the location of the cache when publishing a service.
This needs to be set in ArcGIS Server Manager, before publishing in ArcMap.
Instructions are found on ArcGIS Server help page:
By default, the server directories [including cache directories] are installed at <ArcGIS Server installation directory>/arcgis/server/usr/directories. You can manage your server directories in Manager by navigating to Site > GIS Server > Directories.
So you'd want to add something like:
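For example (a hypothetical path, adjust to your own setup), you could register a new cache directory such as Z:\arcgisserver\directories\arcgiscache under Site > GIS Server > Directories and select it when publishing.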

Umbraco media sharing - Development

I've been struggling with setting up Umbraco on a development machine and a test server...
Both environments connect to the same database and I use uSync to keep all my changes in git; however, media files are a real p.i.t.a.
I started off by adding media on my dev machine and copying over the media folder when publishing to test. Not very elegant, so I tried using the rootPath and rootUrl in the FileSystemProviders config. The path points to a network file share and the URL to a dedicated virtual directory hosted on a media.test.mysite.com subdomain.
Surprise ... when opening the site, the old media has vanished, because Umbraco saves the absolute path in the cmsProperty tables ({'src': 'http://media.mysite.com/1041/...'}), whereas previously it saved the relative path when the virtual root was configured.
I'd like to alter how the media URLs are composed in both the front- and backend: define a media_root appSetting holding the protocol, hostname and port (http://media.test.mysite.com) and prepend this to the src value that comes from the DB (see the sketch below)...
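A minimal sketch of that idea, assuming a hypothetical media_root appSetting (the key name and values below are placeholders, not an existing Umbraco API):

using System.Configuration;

// Hypothetical appSetting, e.g. <add key="media_root" value="http://media.test.mysite.com" />
string mediaRoot = ConfigurationManager.AppSettings["media_root"].TrimEnd('/');

// Relative src as stored in the cmsProperty data.
string src = "/media/1041/image.jpg";

// Prepend the configured host to build the absolute URL.
string absoluteUrl = mediaRoot + src; // http://media.test.mysite.com/media/1041/image.jpg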
Any suggestions?
I already tried a custom URLProvider but this only works for non-media content ... it seems :-|
Thanks!
Y.
I'd recommend using the Umbraco File System Provider for Azure, which will upload your media to Azure Blob Storage. You can then use the disk cache that comes with ImageProcessor.Web (included in Umbraco Core) to cache the files locally. We run our dev environments pointing to the same blob storage as the other environments, so there is no need to copy the files. The references stay relative (/media/1001/file.jpg) when using disk cache, thanks to the HTTP module in ImageProcessor.Web which caches them to disk. You could alternatively use the ImageProcessor Azure blob cache plugin and have the images load from Azure. You might also want to check out the documentation at Our.Umbraco.org (even if you aren't using Umbraco Cloud).

How to get a file after uploading it to Azure Storage

I have uploaded files (mostly media files) to Azure's File Storage, which I am able to see in Azure's Explorer as well. But when I view a file as an anonymous user, I am not able to view it. I tried the permissions settings as well, but to no avail.
Any help would be welcomed :)
Azure Files supports Shared Access Signatures (SAS). A SAS is a token that you compute with the storage account key and that grants access to a particular URL. Here is an example (the storage account name is obfuscated here):
https://mystorageaccount.file.core.windows.net/sampleshare/2.png?sv=2015-04-05&sr=f&si=sampleread&sig=Zq%2BfflhhbAU4CkCuz9q%2BnUFEM%2Fsg2PbXe3L4MeCC9Bo%3D&sip=0.0.0.0-255.255.255.255
There is sample code on how to create a SAS for Azure Files at https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/, § "Generate a shared access signature for a file or file share".
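As a minimal sketch with the classic WindowsAzure.Storage SDK that the linked article targets (the connection string, share name and file name below are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

// Parse the account from its connection string and point at the file to share.
var account = CloudStorageAccount.Parse("<your storage connection string>");
var share = account.CreateCloudFileClient().GetShareReference("sampleshare");
var file = share.GetRootDirectoryReference().GetFileReference("2.png");

// Create a read-only SAS valid for 24 hours and append it to the file URI.
string sas = file.GetSharedAccessSignature(new SharedAccessFilePolicy
{
    Permissions = SharedAccessFilePermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(24)
});
string readableUrl = file.Uri + sas;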
You can also do it interactively with a number of tools. For instance, CloudXPlorer has this feature.

How can I set folder permissions for elastic beanstalk windows application?

I am currently building a C# WebApi 2 application that I will be uploading to an Amazon Elastic Beanstalk instance to deploy. I am having success so far, and on my local machine, I just finished testing the file upload capability in order for clients to upload images.
The way it goes is: I accept the multipart/form-data in the Web API and save the temp file (with a random name like BodyPart_24e246c7-a92a-4a3d-84ef-c1651416e667) to the App_Data folder. The temporary file is put into an S3 bucket and I create a reference to it in my SQL Server database.
Testing works fine with single or multiple file uploads locally but when I deploy the application to Elastic Beanstalk and try to upload I get errors like "Could not find a part of the path 'C:\inetpub\wwwroot\sbeAPI_deploy\App_Data\BodyPart_8f552d48-ed9b-4ec2-9986-88cbffd673ee'" or a similar one saying access is denied altogether.
I have been trying to find the solution online for a few hours now, but the AWS documentation is all over the place and tutorials/other questions seem to be outdated. I believe it has something to do with not having permission to write the temporary files on the EC2 server, but I can't figure out how to fix it.
Thanks very much in advance.
This has been possible since April 2013. Basically, the steps you need to perform are the following:
Create a folder called .ebextensions at the top level of your project through the Solution Explorer
Add your configuration file to this folder, e.g. myapp.config (replace myapp with your Elastic Beanstalk app's name)
Add the code displayed underneath to the configuration file you just created. Replace MyApp with your project name (not solution name) as displayed in Visual Studio
All set! Be sure there's a file within App_Data, otherwise Visual Studio won't publish it.
{
    "container_commands": {
        "01-changeperm": {
            "command": "icacls \"C:/inetpub/wwwroot/MyApp_deploy/App_Data\" /grant DefaultAppPool:(OI)(CI)F"
        }
    }
}
To give write permission to your DefaultAppPool you can:
create an .ebextensions folder
create a config file and place it in your .ebextensions folder
This will change the permissions on your wwwroot folder:
container_commands:
  01-changeperm:
    command: 'icacls "C:\\inetpub\\wwwroot" /grant "IIS APPPOOL\DefaultAppPool:(OI)(CI)F"'
I had the same problem (unable to write to a file in the App_Data folder of my web application on Elastic Beanstalk).
In my case it was sufficient to create a dummy file in the App_Data folder in my Visual Studio project. When I did this, the App_Data folder was created during deployment with permissions that allow the web application to write to it.
No need for .ebextensions to change folder permissions.
The App_Data folder does not have write permissions by default, and you would have to set appropriate permissions during deployment of your apps.
Check out this post for a detailed explanation of how to do it: http://thedeveloperspace.com/granting-write-access-to-asp-net-apps-hosted-on-aws-beanstalk/
This question is pretty old, but for anyone else who ends up with the same issue: I had the same problem with AWS. Connect to your instance and change the properties for the folder you want to upload files to. Select the folder you want to grant read/write access to, click Properties, and set the permissions that way.
My issue was with uploading images to the server. I couldn't put them in the App_Data folder since that is a special folder reserved for the app only, and I needed the images to be accessible through a URL. So I created another folder, "Uploads". I published my API, then connected to the instance through Remote Desktop, located the Content folder, and set its properties to read/write for DefaultAppPool. That solved my problem; hope this helps someone out there.

Access to filesystem on AppHarbor

I want to try AppHarbor, but I have an application which stores uploaded files in a certain place on the filesystem. Is this compatible with AppHarbor? Can I store files in the file system and access them later?
(what kind of path can I expect, like c:\blabla something or what?)
Thank you.
You can store files on the local filesystem, but the application directory is wiped on each new deployment so it's not recommended to rely on for file storage.
Instead we recommend that you use a cloud storage service such as Amazon S3, Google Cloud Storage or similar. There are .NET libraries for both services.
We recently wrote a blog post about uploading files directly to S3 and GCS from the browser that you might want to read.
If you are using a background worker, you need to 'Enable File System Write Access' in the settings of your application.
Then, you are permitted access to write to: Path.GetTempPath()
Sourced from this support question: http://support.appharbor.com/discussions/problems/5868-create-directory-in-background-worker
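A small sketch of that, assuming a background worker with 'Enable File System Write Access' turned on (the file name and contents are placeholders):

using System;
using System.IO;

// The temp directory is writable from a background worker once
// file system write access has been enabled for the application.
string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".tmp");
File.WriteAllText(path, "uploaded content goes here");
Console.WriteLine($"Wrote temporary file to {path}");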
