Identify, ignore, or stop malicious files when copying files between Azure storage accounts using AzCopy - azure-blob-storage

Scenario
The source storage account [X] can store any type of file a user uploads, including potentially malicious ones.
The target storage account [Y] is private and protected by Microsoft Defender for Cloud.
We use AzCopy in PowerShell 7 to transfer files between the storage accounts.
Problem
When copying files with AzCopy from a local user folder to the target storage account [Y], malicious files are blocked and an exception is thrown during the copy. The same does not happen when malicious files are copied directly from the source storage account [X] to the target storage account [Y].
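For reference, the direct account-to-account copy we run looks roughly like this (account names, container names, and SAS tokens are placeholders):

    # Server-to-server copy from source account [X] to target account [Y]; all names and tokens are placeholders
    azcopy copy `
        "https://accountx.blob.core.windows.net/uploads?<source-sas-token>" `
        "https://accounty.blob.core.windows.net/incoming?<target-sas-token>" `
        --recursive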
We tried downloading the files from the source storage account [X] to a VM and then using AzCopy to copy them from the VM to the target storage account [Y], but this approach is expensive in both cost and time.
Do I need to configure anything specific on the target storage account [Y] to block malicious files?
Note: Microsoft Defender for Cloud is enabled on the target storage account [Y].
Please share your suggestions.

Related

How can I create an automatic file backup to cloud storage?

I have 4 Windows workstations that I need to back up to Google Cloud Storage, and I would like it to happen automatically. Is that possible?
You could set up, on each workstation, a scheduled task that regularly runs gsutil rsync against a dedicated folder in a Google Cloud Storage bucket, syncing the appropriate local folder from that workstation.
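A minimal sketch of that setup, assuming the Google Cloud SDK (gsutil) is installed on each workstation; the bucket name, folder paths and schedule below are placeholders:

    # backup.ps1 - mirror a local folder to a Cloud Storage bucket (placeholder paths and bucket name)
    # -m runs transfers in parallel, -r recurses into subfolders
    gsutil -m rsync -r "C:\Data" "gs://my-backup-bucket/workstation1"

    # One-time setup: register a daily 02:00 run of the script with Windows Task Scheduler
    $action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\scripts\backup.ps1"
    $trigger = New-ScheduledTaskTrigger -Daily -At 2am
    Register-ScheduledTask -TaskName "GcsBackup" -Action $action -Trigger $trigger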

How do you change the default cache directory when publishing a service in ArcMap?

I am trying to publish a service using ArcMap, and I need to cache the layers on this service when I publish it.
The default cache directory is in a drop-down list with no other available options, and this directory is located on the C drive.
However, I need the caching to take place on the Z drive instead, as I have no space left on the C drive.
Can this be done, and if so, how? Changing the display cache directory in the ArcMap options did not change the location of the cache when publishing a service.
This needs to be set in ArcGIS Server Manager, before publishing in ArcMap.
Instructions are found on the ArcGIS Server help page:
By default, the server directories [including cache directories] are installed at <ArcGIS Server installation directory>/arcgis/server/usr/directories. You can manage your server directories in Manager by navigating to Site > GIS Server > Directories.
So you'd want to add something like:
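a new cache directory in Manager whose physical path sits on the Z drive, for example (the exact path is only an illustration; match it to your own layout):

    Z:\arcgisserver\directories\arcgiscache

and then pick that directory as the cache directory when you publish the service.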

Umbraco media sharing - Development

I've been struggling with setting up Umbraco on a development machine and a test server...
Both environments connect to the same database, and I use uSync to keep all my changes in Git; media files, however, are a real pain.
I started off by adding media on my dev machine and copying the media folder over when publishing to test. That wasn't very elegant, so I tried using rootPath and rootUrl in the FileSystemProviders config. The path points to a network file share and the URL to a dedicated virtual directory hosted on a media.test.mysite.com subdomain.
Surprise: when opening the site, the old media has vanished, because Umbraco saves the absolute path in the cmsProperty tables ({'src': 'http://media.mysite.com/1041/...' }), whereas it previously stored the relative path when the virtualRoot was configured.
I'd like to alter how the media URLs are composed in both the front end and the back end: define a media_root app setting holding the protocol, hostname and port (http://media.test.mysite.com) and prepend it to the src values that come from the DB...
Any suggestions?
I already tried a custom UrlProvider, but it seems this only works for non-media content :-|
Thanks!
Y.
I'd recommend using the Umbraco File System Provider for Azure, which will upload your media to Azure Blob Storage. You can then use the disk cache that comes with ImageProcessor.Web (included in Umbraco core) to cache the files locally. We run our dev environments pointing to the same blob storage as the other environments, so there is no need to copy the files. And the references stay relative (/media/1001/file.jpg) when using disk cache, thanks to the HTTP module in ImageProcessor.Web, which caches the images to disk. (You could alternatively use the ImageProcessor Azure blob cache plugin and have the images load from Azure.) You might also want to check out the documentation at Our.Umbraco.org, even if you aren't using Umbraco Cloud.
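If it helps, a rough outline of that setup (the package name is my assumption from the provider's listing; verify it against the current NuGet page): install the provider from the NuGet Package Manager Console, then point it at your storage account and container in ~/Config/FileSystemProviders.config.

    # NuGet Package Manager Console (PowerShell); package ID assumed, check the current listing
    Install-Package UmbracoFileSystemProviders.Azure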

How to get a file after uploading it to Azure Storage

I uploaded some files, mostly media files, to Azure File Storage, and I can see them in Azure Storage Explorer. But when I try to view a file as an anonymous user, I cannot access it. I tried adjusting the permission settings as well, but to no avail.
Any help would be welcomed :)
Azure Files supports Shared Access Signatures (SAS). A SAS is a token computed with the storage account key that grants access to a particular URL. Here is an example (the storage account name is obfuscated):
https://mystorageaccount.file.core.windows.net/sampleshare/2.png?sv=2015-04-05&sr=f&si=sampleread&sig=Zq%2BfflhhbAU4CkCuz9q%2BnUFEM%2Fsg2PbXe3L4MeCC9Bo%3D&sip=0.0.0.0-255.255.255.255
There is sample code showing how to create a SAS for Azure Files at https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/, in the section "Generate a shared access signature for a file or file share".
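If you prefer doing it from PowerShell instead of .NET, here is a rough sketch using the Az.Storage module (the account name, key, share and file names are placeholders):

    # Build a read-only SAS URL for one file on an Azure file share; all names and the key are placeholders
    $ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"
    New-AzStorageFileSASToken -ShareName "sampleshare" -Path "2.png" `
        -Permission "r" -ExpiryTime (Get-Date).AddHours(4) -Context $ctx -FullUri
    # With -FullUri the cmdlet returns the complete https URL, including the ?sv=...&sig=... query string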
You can also do it interactively with a number of tools. For instance, CloudXPlorer has this feature.

Access to filesystem on AppHarbor

I want to try AppHarbor, but I have an application that stores uploaded files in a certain place on the filesystem. Is it compatible with AppHarbor? Can I store files in the file system and access them later?
(What kind of path can I expect? Something like c:\blabla, or what?)
Thank you.
You can store files on the local filesystem, but the application directory is wiped on each new deployment, so relying on it for file storage is not recommended.
Instead we recommend that you use a cloud storage service such as Amazon S3, Google Cloud Storage or similar. There are .NET libraries for both services.
We recently wrote a blog post about uploading files directly to S3 and GCS from the browser that you might want to read.
If you are using a background worker, you need to enable 'File System Write Access' in the settings of your application.
Then you are permitted to write to the path returned by Path.GetTempPath().
Sourced from this support question: http://support.appharbor.com/discussions/problems/5868-create-directory-in-background-worker

Resources