I'm not able to download a large file from an Azure Storage Blob container (using SAS) to C:\Download. I tried Azure Storage Explorer 1.10.1 / 1.20.0 / 1.20.1 on Windows Server 2012 R2 / 2019.
I also tried AzCopy. It is a VHD file with a size of 127 GB.
It runs for about 35 minutes and then fails.
What is wrong? Can you please provide me with a working solution?
As suggested by #gaurav mantri:
When the download from the Azure Storage Blob container fails, try
checking the SAS expiry date (the se parameter in your SAS token), as
the download can fail if the SAS token has expired.
If the SAS token is not expired, look into the AzCopy logs, which may contain
information about the failure.
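A quick way to find the relevant log is through AzCopy's job commands; this is a minimal sketch, and the job ID is a placeholder taken from the list output:

azcopy jobs list
azcopy jobs show <job-id>
# By default AzCopy writes its logs to the .azcopy folder in the user profile
# (for example %USERPROFILE%\.azcopy on Windows); the log file is named after the job ID.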
Thank you #gaurav mantri for the suggestion.
Solution found by #majkl:
The log contained the MD5 hash of the downloaded data, which did not match the expected value stored in the Blob/File service. This means that either there is a data integrity error or another tool has failed to keep the stored hash up to date.
Downloading the large file from the Azure Storage Blob container was successful when the parameter "--check-md5 NoCheck" was used.
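For reference, a minimal sketch of the AzCopy command that skips the MD5 validation on download; the account, container, file name, and SAS token are placeholders:

# Download the blob without validating the stored MD5 hash
azcopy copy "https://<account>.blob.core.windows.net/<container>/<file>.vhd?<sas-token>" "C:\Download\<file>.vhd" --check-md5 NoCheck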
I've created everything (as far as I can tell) as described in this article:
building-snowpipe-on-azure-blob
but the pipe only works after running "alter pipe myPipe refresh".
The data loads correctly, but auto_ingest doesn't work.
Please give me advice on how to find the issue.
The refresh pipe command fetches files directly from the stage, while the auto-ingest option doesn't take the same route: it consumes messages from Azure Queue storage. Therefore, even if the Azure Blob storage container is correct, a message could be delivered to the queue but not to Snowflake.
Solution Details: https://community.snowflake.com/s/article/Ingesting-new-files-with-Snowpipe-for-Azure-fails-while-refreshing-the-pipe-works
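To see whether notifications are actually reaching Snowflake, you can inspect the pipe's status and its notification channel. This is a hedged sketch using snowsql; the database, schema, and pipe names are placeholders:

# Typically shows the notification channel and any outstanding queue messages
snowsql -q "SELECT SYSTEM\$PIPE_STATUS('mydb.myschema.myPipe');"
# The notification_channel column is the Azure queue Snowflake listens on
snowsql -q "SHOW PIPES;"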
Our requirement is to take a manual backup of an Azure SQL Managed Instance database when an event is triggered. This database is TDE enabled. I cannot decrypt it and take a manual backup, because decrypting the database takes a long time. Is there any way to back up the database manually and export it to Azure storage?
If removing TDE takes too much time and is not an option, then you can export the database to a BACPAC file.
If removing TDE takes too much time, I assume the database has a considerable size; in that case I advise using SqlPackage running on an Azure VM in the same region as the MI (a sketch is shown below).
Be aware that the BACPAC file is not encrypted.
But if you need a backup file, then you need to remove TDE, or copy the database and remove TDE on the copy, and then run the backup.
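A minimal sketch of the export approach, assuming SqlPackage is installed on an Azure VM in the MI's region; the server, database, credentials, local path, storage account, container, and SAS token are placeholders:

# Export the database to a BACPAC file
SqlPackage /Action:Export /SourceServerName:"<mi-name>.<dns-zone>.database.windows.net" /SourceDatabaseName:"MyDb" /SourceUser:"<user>" /SourcePassword:"<password>" /TargetFile:"D:\export\MyDb.bacpac"
# Upload the BACPAC to Azure blob storage
azcopy copy "D:\export\MyDb.bacpac" "https://<account>.blob.core.windows.net/<container>/MyDb.bacpac?<sas-token>"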
You could switch from service-managed TDE to customer-managed TDE, with the protector key kept in Azure Key Vault. You can take copy-only backups of your database(s) as long as they are encrypted with that key. Please be aware that you need to keep the protector key(s) for as long as you keep the backup files. To restore such a backup to the same or another managed instance, you first need to give the destination instance access to the key.
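A hedged sketch of such a copy-only backup to a URL, run with sqlcmd; the server, login, storage account, container, database name, and SAS token are placeholders:

# Credential that lets the instance write to the container (SAS token without the leading '?')
sqlcmd -S "<mi-name>.<dns-zone>.database.windows.net" -U "<user>" -P "<password>" -Q "CREATE CREDENTIAL [https://<account>.blob.core.windows.net/<container>] WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<sas-token>';"
# Copy-only backup of the TDE-protected database straight to blob storage
sqlcmd -S "<mi-name>.<dns-zone>.database.windows.net" -U "<user>" -P "<password>" -Q "BACKUP DATABASE [MyDb] TO URL = 'https://<account>.blob.core.windows.net/<container>/MyDb.bak' WITH COPY_ONLY;"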
I am attempting to use the Microsoft Azure Storage Explorer, attaching with a SAS URI. But I always get the error:
Inadequate resource type access. At least service-level ('s') access
is required.
Here is my SAS URI with portions obfuscated:
https://ti<...>hare.blob.core.windows.net/?sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRUD<...>U0%3D
And here is my connection string with portions obfuscated:
BlobEndpoint=https://tidi<...>are.blob.core.windows.net/;QueueEndpoint=https://tidi<...>hare.queue.core.windows.net/;FileEndpoint=https://ti<...>are.file.core.windows.net/;TableEndpoint=https://tid<...>hare.table.core.windows.net/;SharedAccessSignature=sv=2018-03-28&ss=b&srt=co&sp=rwdl&se=2027-07-01T00:00:00Z&st=2019-07-01T00:00:00Z&sip=52.<...>.235&spr=https&sig=yD%2FRU<...>YU0%3D
It seems like the problem is with the construction of my URI/endpoints/connection string, etc., rather than with the permissions granted to me on the server, because when I click Next, the error displays instantaneously. I do not believe it even tried to reach the server.
What am I doing wrong? (As soon as I get this working, I'll be using the URI/etc to embed in my C# app for programmatic access.)
What you need in order to connect is service-level access, which is controlled by the "srt" (signed resource types) part of the SAS token.
The SAS you have has srt=co (container and object); it also needs the "s" (service) resource type, so you need to create a new SAS token. It can be generated in the portal, the Azure CLI, or PowerShell.
In the portal it is this part:
You have to go into the storage account and select what you need:
Allowed services (if you are looking for blob)
Blob
Allowed resource types
Service (make sure this one is activated)
Container
Object
Allowed permissions (these to do everything)
Read
Write
Delete
List
Add
Create
Example where to look
If you need more info, look here:
https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas?redirectedfrom=MSDN
If you would like to create the SAS key in the CLI, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
If you would like to create the SAS key in PowerShell, use this:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-powershell
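For example, an account SAS that includes the service resource type can be generated with the Azure CLI roughly like this; it is a sketch, the account name and expiry are placeholders, and you may need to add --account-key if your login cannot list the account keys:

az storage account generate-sas --account-name <account> --services b --resource-types sco --permissions rwdlac --expiry 2027-07-01T00:00:00Z --https-only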
I had a similar issue trying to connect to a blob container using a Shared Access Signature (SAS) URL, and this worked for me:
Instead of generating the SAS URL in Azure Portal, I used Azure Storage Explorer.
Right-click the container that you want to share -> "Get Shared Access Signature"
Select the expiry time and permissions and click Create
This URL should work when your client/user tries to connect to the container.
Cheers
I had the same problem and managed to get this to work by hacking the URL and changing "srt=co" to "srt=sco". It seems to need the "s".
I've been going around the internet searching for a solution, and apparently there doesn't seem to be one. I even asked in Microsoft's support forum, but they didn't help.
I'm using OSX.
Error: Storage account *** has container(s) which have an active image and/or disk artifacts. Ensure those artifacts are removed from the image repository before deleting this storage account.
at Object.ServiceClient._normalizeError (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:682:23)
at Object.ServiceClient._processResponse (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:335:32)
at Request.ServiceClient._performRequest.self._buildRequestOptions.processResponseCallback [as _callback] (/usr/local/azure/node_modules/azure/lib/services/core/serviceclient.js:183:35)
at Request.init.self.callback (/usr/local/azure/node_modules/azure/node_modules/request/index.js:148:22)
at Request.EventEmitter.emit (events.js:99:17)
at Request.onResponse (/usr/local/azure/node_modules/azure/node_modules/request/index.js:891:14)
at Request.EventEmitter.emit (events.js:126:20)
at IncomingMessage.Request.onResponse.buffer (/usr/local/azure/node_modules/azure/node_modules/request/index.js:842:12)
at IncomingMessage.EventEmitter.emit (events.js:126:20)
at IncomingMessage._emitEnd (http.js:366:10)
at storageAccount_deleteCommand__2 (/usr/local/azure/lib/commands/storage.account._js:135:8)
I have an empty image, disk, and container
Empty Disk
Empty Container
You most likely have one or more blobs with an active lease. See this blog post on MSDN on how to use Node.js (the basis of the Azure cross-platform CLI) to break a lease: http://blogs.msdn.com/b/interoperability/archive/2012/08/01/node-js-script-for-releasing-a-windows-azure-blob-lease.aspx
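With the current Azure CLI the lease can also be broken directly; this is a sketch, and the account, container, and blob names are placeholders:

# Break the active lease on the VHD blob so the disk/image artifact can be removed
az storage blob lease break --account-name <account> --container-name vhds --blob-name <disk>.vhd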
I have an application on the Windows Azure cloud and I'm using the Windows Azure Co-Located Cache.
Sometimes, when I publish the website/web service, this error appears when I call the DataCacheFactory.GetCache method:
Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache.
This problem can go away after a few moments, but sometimes it never does, and then I need to publish the projects again.
The stacktrace is:
Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload,
EndpointID destination) at Microsoft.ApplicationServer.Caching.DataCacheFactory.EstablishConnection(IEnumerable`1 servers, RequestBody request, Func`3
sendMessageDelegate, DataCacheReadyRetryPolicy retryPolicy) at Microsoft.ApplicationServer.Caching.SocketClientProtocol.Initialize(IEnumerable`1 servers)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.GetCache(String cacheName, CreateNewCacheDelegate cacheCreationDelegate,
DataCacheInitializationViaCopyDelegate initializeDelegate)
See this link and check whether it can help you:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/#comment-743576866
We were missing the required blob storage container on local dev storage. After creating the container 'cacheclusterconfigs', everything seems to be working now.
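If you need to recreate that container by hand against the local storage emulator, something like this should work; it is a sketch assuming the Azure CLI and the development storage connection string:

az storage container create --name cacheclusterconfigs --connection-string "UseDevelopmentStorage=true"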
The 'cacheclusterconfigs' container will be created by the service internally; you may have accidentally deleted it.
Note: IMO, please also verify the cache name. By default you will be using the cache named 'default'.