I've been searching all over the internet for a solution, and apparently there isn't one. I even asked in Microsoft's support forum, but they didn't help.
I'm using OS X.
Error: Storage account *** has container(s) which have an active image and/or disk artifacts. Ensure those artifacts are removed from the image repository before deleting this storage account.
at Object.ServiceClient._normalizeError (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:682:23)
at Object.ServiceClient._processResponse (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:335:32)
at Request.ServiceClient._performRequest.self._buildRequestOptions.processResponseCallback [as _callback] (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:183:35)
at Request.init.self.callback (/usr/local/azure/node_modules/azure/node_modules/request/index.js:148:22)
at Request.EventEmitter.emit (events.js:99:17)
at Request.onResponse (/usr/local/azure/node_modules/azure/node_modules/request/index.js:891:14)
at Request.EventEmitter.emit (events.js:126:20)
at IncomingMessage.Request.onResponse.buffer (/usr/local/azure/node_modules/azure/node_modules/request/index.js:842:12)
at IncomingMessage.EventEmitter.emit (events.js:126:20)
at IncomingMessage._emitEnd (http.js:366:10)
at storageAccount_deleteCommand__2 (/usr/local/azure/lib/commands/storage.account._js:135:8)
I have an empty image, an empty disk, and an empty container:
(screenshots: empty disk, empty container)
You most likely have one or more blobs with an active lease. See this blog post on MSDN for how to use Node.js (the basis of the Azure cross-platform CLI) to break a lease: http://blogs.msdn.com/b/interoperability/archive/2012/08/01/node-js-script-for-releasing-a-windows-azure-blob-lease.aspx
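If you'd rather do it from .NET, here's a rough sketch along the same lines. It assumes the classic Microsoft.WindowsAzure.Storage client library; the connection string and the 'vhds' container name are placeholders for your own values.

// Rough sketch: enumerate blobs in a container and break any active leases.
// Assumes the classic Microsoft.WindowsAzure.Storage client library;
// connection string and container name are placeholders.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BreakLeases
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your storage connection string>");
        var container = account.CreateCloudBlobClient().GetContainerReference("vhds");

        foreach (var item in container.ListBlobs(null, true)) // flat listing
        {
            var blob = (ICloudBlob)item;
            blob.FetchAttributes();
            if (blob.Properties.LeaseState == LeaseState.Leased)
            {
                blob.BreakLease(TimeSpan.Zero); // end the lease immediately
                Console.WriteLine("Broke lease on " + blob.Name);
            }
        }
    }
}

Breaking with TimeSpan.Zero ends the lease right away; note that, per the error message, you may still need to delete the disk/image objects themselves from the image repository before the storage account delete will go through.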
I'm not able to download a large file from an Azure Storage blob container (using SAS) to C:\Download. I tried Azure Storage Explorer 1.10.1 / 1.20.0 / 1.20.1 on Windows Server 2012 R2 / 2019.
I also tried AzCopy. It is a VHD file, 127 GB in size.
It runs for about 35 minutes and then fails.
What is wrong? Can you please provide me with a working solution?
As suggested by @gaurav mantri:
When the download from the Azure Storage blob container fails, try checking the SAS expiry date (the se parameter in your SAS token), as your download could fail if the SAS token has expired.
If the SAS token is not expired, look into the AzCopy logs, which can have information about the failure.
Thank you @gaurav mantri for the suggestion.
Solution found by @majkl:
The log had the MD5 hash of the data, which did not match the expected value as found in the Blob/File Service. This means that either there is a data integrity error OR another tool has failed to keep the stored hash up to date when checking the MD5 hash.
Downloading the large file from the Azure Storage blob container was successful when the parameter --check-md5 NoCheck was used.
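For reference, the AzCopy invocation then looks roughly like this (account, container, file name, SAS token, and destination path are placeholders):

azcopy copy "https://<account>.blob.core.windows.net/<container>/<file>.vhd?<SAS token>" "C:\Download\<file>.vhd" --check-md5 NoCheck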
I went to deploy over an existing Cloud Service (in staging) and received the following message:
"Error: No deployments were found. Http Status Code: NotFound"
Does anyone know what this means?
I am looking at the Cloud Service, and it surely exists.
UPDATE:
I've been using the same deploy method as in prior (successful) efforts: I simply right-click the cloud service in Visual Studio 2013. In the Windows Azure Publish Summary, I set the correct cloud service name, staging, and Release... and press Publish. Nothing special really, which is why I am perplexed.
You may have exceeded the maximum number of cores allowed on your Azure subscription. Either remove unneeded deployments or ask Microsoft to increase the maximum allowed cores on your Azure subscription.
Since I had this problem and none of the answers above were the cause... I had to dig a little bit more. The RoleName specified in the Role tag must of course match the one in the EndpointAcl tag.
<Role name="TheRoleName">
  <Instances count="1" />
</Role>
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="ac-name-1">
      <Rule action="deny" description="TheWorld" order="100" remoteSubnet="0.0.0.0/32" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="TheRoleName" endPoint="HTTP" accessControl="ac-name-1" />
    <EndpointAcl role="TheRoleName" endPoint="HTTPS" accessControl="ac-name-1" />
  </EndpointAcls>
</NetworkConfiguration>
UPDATE
It seems that the previous situation is not the only one causing this error.
I ran into it again now due to a related but still different mismatch.
In the file ServiceDefinition.csdef the <WebRole name="TheRoleName" vmsize="Standard_D1"> tag must have a vmsize that exists (of course!) but according to Microsoft here (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/) the value Standard_D1_v2 should also be accepted.
At the moment it was causing this same error... once I removed the _v2 it worked fine.
Conclusion: every time something is wrong in the Azure configs this error message might come along... it is then necessary to find out where it came from.
Just to add some info.
The same occurred to me: my VM size was set to a size that was "wrong".
I have multiple subscriptions. I was pointing at one of them and using a "D2" machine; I don't know what happened, but the information was refreshed and that machine size disappeared as an option. I then selected "Large" (old), and it worked well.
Lost 6 hours trying to upload this #$%#$% package.
I think the problem can be related to any VM size issue.
I hit this problem after resizing my role from small to extra-small. I still had the Local Storage set to the default of 20GB, which an extra-small instance can't hold. I ended up reducing it to 100MB and the deployment worked (the role I'm deploying is in maintenance mode only for a couple of months, so I don't care much about getting diagnostics from it).
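In case it helps anyone find it, that size is the sizeInMB value on the LocalStorage declaration in ServiceDefinition.csdef, roughly like this (the resource name here is just an example):

<LocalResources>
  <LocalStorage name="MyLocalStorage" sizeInMB="100" cleanOnRoleRecycle="false" />
</LocalResources>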
A quick tip: I was getting nowhere debugging this with Visual Studio's error message. On a whim, I switched to the Azure portal and manually uploaded the package. That finally gave me a useful error: the VM size was too small for the resources I had requested.
I encountered this error during the initial deployment of a Cloud Service that required a specific SSL Certificate... that was missing from Azure.
Corrected the certificate - deploy succeeded.
(After the first deployment Visual Studio provides a meaningful error in this case.)
I have an application on the Windows Azure cloud and I'm using the Windows Azure Co-Located Cache.
Sometimes, when I publish the website/web service, this error appears when I call the DataCacheFactory.GetCache method:
Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache.
This problem can go away after a few moments, but sometimes it never fixes itself, and then I need to publish the projects again.
The stack trace is:
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload, EndpointID destination)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.EstablishConnection(IEnumerable`1 servers, RequestBody request, Func`3 sendMessageDelegate, DataCacheReadyRetryPolicy retryPolicy)
at Microsoft.ApplicationServer.Caching.SocketClientProtocol.Initialize(IEnumerable`1 servers)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.GetCache(String cacheName, CreateNewCacheDelegate cacheCreationDelegate, DataCacheInitializationViaCopyDelegate initializeDelegate)
See this link and check whether it can help you:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/#comment-743576866
We were missing the required blob storage container on the local dev storage. After creating the container 'cacheclusterconfigs', everything seems to be working now.
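If you want to recreate it by hand, a rough sketch against the storage emulator (assuming the classic Microsoft.WindowsAzure.Storage client library) would be:

// Sketch: recreate the 'cacheclusterconfigs' container in the local
// storage emulator (devstorage).
using Microsoft.WindowsAzure.Storage;

class CreateCacheContainer
{
    static void Main()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount; // local devstorage
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("cacheclusterconfigs");
        container.CreateIfNotExists();
    }
}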
The 'cacheclusterconfigs' container will be created by the service internally; you may have accidentally deleted it.
Note: please also verify the cache name. By default you will be using the cache named 'default'.
I am deploying an MVC 3.0 web app to Windows Azure. I have an action method that takes a file uploaded by the user and stores it in a folder within my web app.
How could I give read/write permissions on that folder to the running process? I read about startup tasks and have a basic understanding, but I wouldn't know:
How to give the permission itself, and
Which running process (user) should I give the permission to.
Many thanks for the help.
EDIT
In addition to @David's answer below, I found this link extremely useful:
https://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/
For local storage, I wouldn't get caught up with granting access permissions to various directories. Instead, take advantage of the storage resources available specifically to your running VMs. With a given instance size, you have local storage available to you ranging from 20GB to almost 2TB (full sizing details here). To take advantage of this space, you'd create local storage resources within your project:
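These are declared in ServiceDefinition.csdef (or via the role's Properties > Local Storage tab in Visual Studio); the role name and size below are just example values:

<WebRole name="MyWebRole" vmsize="Small">
  <LocalResources>
    <LocalStorage name="moreStorage" sizeInMB="10240" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>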
Then, in code, grab a drive letter to that storage:
var storageRoot = RoleEnvironment.GetLocalResource("moreStorage").RootPath;
Now you're free to use that storage. And... none of that requires any startup tasks or granting of permissions.
Now for the caveat: This is storage that's local to each running instance, and isn't shared between instances. Further, it's non-durable - if the disk crashes, the data is gone.
For persistent, durable file storage, Blob Storage is a much better choice, as it's durable (triple-replicated within the datacenter, and geo-replicated to another datacenter) and it's external to your role instances, accessible from any instance (or any app, including your on-premises apps).
Since blob storage is organized by container, and blobs within containers, it's fairly straightforward to organize your blobs (and store pretty much anything in a given blob, up to 200GB each). Also, it's trivial to upload/download files to/from blobs, either to file streams or local files (in the storage resources you allocated, as illustrated above).
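For example, here's a rough sketch of copying a local file into a blob and back down again, assuming the classic Microsoft.WindowsAzure.Storage client library (the account connection string, container name, and file paths are placeholders):

// Sketch: upload a local file to a block blob, then download it again.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobFileCopy
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your storage connection string>");
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference("report.pdf");

        // Upload from a file stream...
        using (var fs = File.OpenRead(@"C:\temp\report.pdf"))
        {
            blob.UploadFromStream(fs);
        }

        // ...and download back to a local file.
        blob.DownloadToFile(@"C:\temp\report-copy.pdf", FileMode.Create);
    }
}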
Suppose my Azure role creates a lot of temporary files in Windows temporary folder and forgets to delete them. At some point it will receive "can't create temporary file" error. Suppose that once that happens my role code throws an exception out of RoleEntryPoint.Run() and the role is restarted.
I'm not talking about perfectly Azure-aware code here. My role might use third-party black-box code that knows nothing about Azure and "local storage" and just calls System.IO.Path.GetTempPath(), thus creating files right in some not-Azure-friendly location.
The problem is that if the role is started on the very same host and the temporary folder is not cleaned up by some third party, the folder is still full of files and the role will be unable to function. According to this answer, it might happen that local changes are preserved for my role, which is a huge problem in the above scenario.
Are local changes such as created temporary files guaranteed to be reset when a role is restarted? How do I ensure that the restarted role is in a reasonably clean state?
The role gets reset on new deployments, upgrades, and newly scaled instances from the golden image (base guest OS vhd). Generally for reboots and crashes, you get the same VHD and machine.
The code you write will not have permission to write to the OS drive (D:) without elevation (or logging in via RDP to do this). Further, there is a quota on the user's role root drive (E:) that will prevent you from accidentally filling the drive with files; this used to be such that 10% of the package size was all you were allowed to write. There is also a quota on the resource drive (C:), but that is much more generous and depends on the VM size.
Nothing will be cleaned up on non-local resource drives but you will eventually get errors if you try to exceed quotas. You can turn off sticky storage on local resources and they will be cleaned up on reboot. Of course, like other changes to the disk, these non-local resource temp files will occasionally be lost when the guest OS is upgraded (or underlying root OS). If you are running elevated and really screw up your installation (which you can do), you will need to hit the "Reimage" button on the portal and it will all go back to the golden image.
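For the sticky-storage part, the switch is the cleanOnRoleRecycle attribute on the LocalStorage declaration in ServiceDefinition.csdef; setting it to true (example below, with a made-up resource name and size) means that resource is wiped when the role recycles:

<LocalResources>
  <LocalStorage name="TempFiles" sizeInMB="1024" cleanOnRoleRecycle="true" />
</LocalResources>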