Cannot use azcopy from one blob to another with SAS - azure-blob-storage

I have 2 blob storage accounts, one in eastus and one in canadaeast, and I want to copy one .vhd from eastus to canadaeast.
I go to the eastus account, to the blob I want to copy. I open the context menu, click Generate SAS, generate the token with read/write/create/delete permissions, and copy the URL provided.
I follow the documentation on https://learn.microsoft.com/en-gb/azure/storage/common/storage-use-azcopy-blobs
I downloaded azcopy and typed:
azcopy copy "source?SASToken" "destination"
I get:
===== RESPONSE ERROR (ServiceCode=ResourceNotFound) =====
Description=The specified resource does not exist.
RequestId:60890ddf-601e-0040-5a4b-62d6ba000000
Time:2019-09-03T11:34:08.5367479Z, Details:
Code: ResourceNotFound
I generate a SAS token with read/write/create/delete rights on the destination account, then type:
azcopy copy "source?SASToken" "destination?SASToken"
I get the following result:
Job a30eaae3-1dfd-f94d-5237-002ababea22c summary
Elapsed Time (Minutes): 0.0334
Total Number Of Transfers: 1
Number of Transfers Completed: 0
Number of Transfers Failed: 1
Number of Transfers Skipped: 0
TotalBytesTransferred: 0
Final Job Status: Failed
In the generated log file, I get
RESPONSE Status: 400 The value for one of the HTTP headers is not in the correct format.
Content-Length: [325]
Content-Type: [application/xml]
Date: [Tue, 03 Sep 2019 10:07:21 GMT]
Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
X-Ms-Error-Code: [InvalidHeaderValue]
X-Ms-Request-Id: [1ef8bbe7-a01e-0019-313f-62d33c000000]
X-Ms-Version: [2018-11-09]
I am stuck: I don't know how to copy a blob to another storage account in a different location, and everything I have tried has failed.
What am I doing wrong? Can you help?

You should note the following limitations in the current azcopy release when you copy blobs between storage accounts.
Only accounts that don't have a hierarchical namespace are supported.
You have to append a SAS token to each source URL. If you provide authorization credentials by using Azure Active Directory (AD), you
can omit the SAS token only from the destination URL.
Premium block blob storage accounts don't support access tiers. Omit the access tier of a blob from the copy operation by setting the --s2s-preserve-access-tier option to false (for example: --s2s-preserve-access-tier=false).
You could verify whether the two storage accounts meet the above requirements, especially the third one. I can reproduce your issue when I copy blobs from a StorageV2 account to a StorageV1 account; once I append --s2s-preserve-access-tier=false, it works.
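For reference, the full command with a SAS token appended to both URLs and the flag added would look roughly like this (the account, container and blob names are placeholders, not values from the question):
azcopy copy "https://<sourceaccount>.blob.core.windows.net/<container>/<name>.vhd?<source-SAS>" "https://<destaccount>.blob.core.windows.net/<container>/<name>.vhd?<dest-SAS>" --s2s-preserve-access-tier=false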

According to my test, we can use the command azcopy copy "source?SASToken" "destination?SASToken" to copy one blob to another. The detailed steps are as below.
1. Generate a SAS token for the source and the destination.
2. Copy the blob with azcopy, as in the command above.
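If you prefer generating the tokens from the command line instead of the portal, the Azure CLI can produce a container-level SAS; a rough sketch (the account and container names are placeholders, the expiry is arbitrary, and you may also need to pass --account-key):
az storage container generate-sas --account-name <sourceaccount> --name <container> --permissions rl --expiry 2019-12-31T00:00Z --output tsv
az storage container generate-sas --account-name <destaccount> --name <container> --permissions rwcl --expiry 2019-12-31T00:00Z --output tsv
Append each token (after a ?) to the corresponding blob URL and run the same azcopy copy command shown above.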

Related

JMeter, In what way could a 20M file upload give a 500 error: "Cannot delete temporary file chunk"?

Context: Trying to upload a file to a web application
Let's assume I'm only using the basic options:
HTTP Request Sampler:
Method: Post
Protocol: http
Path: /api/file/upload
Follow Redirects: checked
Use Keep-Alive: checked
Use multipart/form-data: checked
Files Upload:
File Path: C:\Users\etc...etc
Parameter Name: file
MIME: image/jpeg
Cookies are set with the Cookie Manager, and login is also handled.
Uploading small files (130 KB) works fine like this; bigger files throw a 500 error: "Cannot delete temporary file chunk."
Uploading through the website works fine and uses resumable.js (which I assume is also what throws this error).
I am assuming this is due to chunking, since that's basically the only major difference from what I have tried. Does anyone have any insight on this?
Edit: Using the image photoGood, which is "chunked"/split into 2 blocks, I can also form POSTs with these parameters:
resumableIdentifier 20702285-photoGoodjpg
resumableFilename photoGood.jpg
resumableType image/jpeg
resumableRelativePath photoGood.jpg
resumableChunkSize 1048576
resumableChunkNumber 1
resumableTotalChunks 2
resumableTotalSize 1859876
resumableChunkSize 887520
However, only chunk number 1 is used; that is, the chunks are not joined together on the server.
I think your request is missing some mandatory parameters like:
resumableChunkNumber
resumableChunkSize
resumableTotalSize
resumableIdentifier
resumableFilename
resumableRelativePath
etc.
Check out the documentation on the http://www.resumablejs.com/ website for more details.
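For what it's worth, with resumable.js each chunk normally goes out as its own multipart/form-data POST: the metadata parameters are repeated, resumableChunkNumber is incremented, and only that chunk's bytes are sent in the file part; the server only assembles the file once all resumableTotalChunks pieces have arrived. Roughly, using the values from the question:
POST /api/file/upload (multipart/form-data)
resumableChunkNumber=1, resumableTotalChunks=2, resumableChunkSize=1048576, resumableTotalSize=1859876, resumableIdentifier=20702285-photoGoodjpg, resumableFilename=photoGood.jpg, resumableRelativePath=photoGood.jpg
file=<bytes 0-1048575 of photoGood.jpg>
POST /api/file/upload (multipart/form-data)
resumableChunkNumber=2, plus the same metadata as above
file=<bytes 1048576-1859875 of photoGood.jpg>
In JMeter that usually means one HTTP Request sampler (or a loop) per chunk, each attaching the corresponding pre-split piece of the file.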
If you're able to successfully perform the upload using the browser, you should be able to record the associated request(s) using JMeter's HTTP(S) Test Script Recorder. Just make sure to copy the file you are uploading to the "bin" folder of your JMeter installation before uploading it, and JMeter will generate the relevant HTTP Request sampler(s) under the Recording Controller.
Check out Recording File Uploads with JMeter article for more details.
It would also be good to capture the traffic from JMeter and from the real browser using a third-party sniffer tool like Fiddler or Wireshark; this way you will be able to identify the differences. Once you configure JMeter to send the same requests as the real browser, the whole transaction will be successful.

FAILED_PRECONDITION when trying to create a new Google API project

I am getting an error when attempting to create a new project for Google API at https://code.google.com/apis/console
I was hoping the error was temporary, but I have been unable to create a new project for a couple of weeks now.
The error seems to have changed, as it used to include server IP information and a lot of other data. An example, with some potentially private information removed:
APPLICATION_ERROR;google.cloudresourcemanager.projects.v1beta1/DeveloperProjects.CreateProject;com.google.apps.framework.request.StatusException:
generic::FAILED_PRECONDITION:
;AppErrorCode=9;StartTimeMs=1489595147198;tcp;Deadline(sec)=50.0;ResFormat=UNCOMPRESSED;ServerTimeSec=0.027545452117919922;LogBytes=256;FailFast;EffSecLevel=none;ReqFormat=UNCOMPRESSED;ReqID=removed;GlobalID=removed;Server=ip:port
Now the error is a lot shorter, although still seems to be related to the same cause:
com.google.apps.framework.request.StatusException: generic::FAILED_PRECONDITION:
The spinner in the dashboard appears to spin forever, while the error appears underneath alerts after a few seconds. I have tried numerous project names and all fail with the same error.
Is there some type of quota I am missing that is preventing this? The quota menu item requires me to select a project, of which I have none.
Clicking on the error brings me to a page with the following message:
You don't have permissions to perform the action on the selected resource.
Make sure that the Google Developers Console is on for the user that is trying to create the project.
Go to Admin.google.com > Apps > Additional Google services > Google Developers Console and turn it on for any org or user that needs it.
Looks like there's another possible cause, in addition to the one that @AndrewL provided in his solution.
Short answer:
I ran into the same error when I attempted to associate a new GCP project with our billing account via Terraform. Our billing account has a default limit of 5 projects (which we had already hit), and this blocked the association of new projects and generated the Error 400: Precondition check failed., failedPrecondition error message. To fix it, I had to remove/delete one of the already associated projects before I could add a new one.
Long answer:
Here is the error message I encountered in Terraform (sensitive data and IDs redacted):
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ module.<MY MODULE PATH>.project
billing_account: "" => "<MY BILLING ACCOUNT ID>"
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.<MY MODULE PATH>.project: Modifying... (ID: <MY PROJECT ID>)
billing_account: "" => "<MY BILLING ACCOUNT ID>"
Error: Error applying plan:
1 error(s) occurred:
* module.<MY MODULE PATH>.project: 1 error(s) occurred:
* google_project.project: Error setting billing account "<MY BILLING ACCOUNT ID>" for project "projects/<MY PROJECT ID>": googleapi: Error 400: Precondition check failed., failedPrecondition
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
My billing account was at its maximum of 5 projects. As a test, I removed one of the projects and then ran Terraform again. It then successfully added the new project to the billing account. To double check, I attempted to add yet another new project to the billing account (to push the amount to 6) and then received the same error message again.
Straight up deleting one of the associated projects also works.
It stands to reason that requesting a limit increase for your billing account's associated projects will also fix this issue.
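If you want to check which projects are currently linked to the billing account, or free up a slot, without going through the console, something like the following gcloud commands should work (the beta billing commands are assumed to be available in your gcloud version; the IDs are placeholders):
gcloud beta billing projects list --billing-account=<BILLING_ACCOUNT_ID>
gcloud beta billing projects unlink <PROJECT_ID>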
Another possible reason for this error can be accessing an .xlsx file. If you are accessing the file with Google Sheets, you need to save the .xlsx file as a Google Sheets document by choosing the option available in the File menu at the top.
Another possible reason for this is an organization policy denying the particular API at the org/folder/project level.
For me, the error occurred because I was trying to read the file from a shared Google Drive folder. Check that the drive where the file lives is your own drive and not a shared one.

Google Drive Rest API V3 Downloading large files with alt=media option (>25MB)

I am trying to figure out how I can download a large file directly in the browser via the Google Drive API V3.
I'm using option 1, which is: Download a file — files.get with alt=media file resource.
When I send my request (https://www.googleapis.com/drive/v3/files/[fileId]?alt=media), I get the content of the file as the response.
Currently I'm creating a blob with the response of my query, then reading that blob to start the download in the browser.
The problem with that is that reading the blob consumes a lot of memory, and if the file's size is more than 10 MB, the browser crashes.
So my question is: how can I start the download in the browser via the Google APIs, as mentioned in the screenshot below? See expected result
Try downloading the file in chunks using the Range HTTP header. See the "Partial download" section at https://developers.google.com/drive/v3/web/manage-downloads.
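A partial download is just the same files.get request with a Range header added, repeated for successive byte ranges instead of pulling the whole file into a single blob. Roughly (the file ID, access token and 10 MB chunk size are placeholders):
GET https://www.googleapis.com/drive/v3/files/<fileId>?alt=media
Authorization: Bearer <access_token>
Range: bytes=0-10485759
The response comes back as 206 Partial Content with a Content-Range header; request bytes=10485760-20971519 next, and so on, writing each chunk out as it arrives rather than buffering the entire file in memory.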

Upload huge numbers of files to blob storage

Do you know the best way to upload a very large number of files to an Azure Blob container?
I am currently uploading multiple files to Azure blob storage. The number of files may be huge, like 30,000 or more (each file could be 10 KB~1 MB in size). First, I have a list of file locations; then I use Parallel.ForEach to upload the files. The code snippet looks like this:
`List<string> locations = ...;
Parallel.ForEach(locations, location =>
{
    ...
    UploadFromStream(...);
    ...
});`
The code produces inconsistent results.
Sometimes it runs well and I can see all the files uploaded to the Azure blob container.
Sometimes I get exceptions like this:
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature., Inner Exception: The remote server returned an error: (403) Forbidden
Sometimes I get a timeout exception.
I have worked on the issue for several days; unfortunately, I haven't found a perfect solution yet. So I want to know what you do when handling a similar scenario: how do you upload very large numbers of files to Azure blob storage?
In the end, I did not find out what was wrong with my code.
However, I have found a solution for this issue. I dropped Parallel.ForEach and just use a common foreach. Then I use the BeginUploadFromStream method instead of UploadFromStream, which actually uploads the files asynchronously.
So far, it runs perfectly and is more stable, without any exceptions.
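For what it's worth, a minimal sketch of that pattern with the classic Microsoft.WindowsAzure.Storage SDK might look like the code below; the container variable and the way the blob name is derived are assumptions, not the poster's actual code.
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

// container is an already-initialized CloudBlobContainer (assumption)
static void UploadAll(CloudBlobContainer container, List<string> locations)
{
    foreach (var location in locations)
    {
        var blob = container.GetBlockBlobReference(Path.GetFileName(location));
        var stream = File.OpenRead(location);
        // BeginUploadFromStream kicks off the upload asynchronously;
        // EndUploadFromStream completes it in the callback.
        blob.BeginUploadFromStream(stream, ar =>
        {
            blob.EndUploadFromStream(ar);
            stream.Dispose();
        }, null);
    }
}
Note that the Begin/End calls return immediately, so a real program still has to wait for the callbacks (or cap the number of in-flight uploads) before exiting.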

FTP Failed Transfers (Index page)

I am a web designer and have made updates to an index.html page for a client. I am using the newest version of FileZilla. I have tried uploading the newest version of this page from the local to the remote site.
Every time I try to complete this task I receive the following:
Response: 227 Entering Passive Mode (64,148,130,80,12,243).
Command: STOR index.html
Response: 550 index.html: Access is denied.
However, I AM able to upload other files to the server, for example new images.
I have not made ANY configuration changes to the hosting account.
Any help/information would be great.
Maybe the login you are using isn't in the same group as the one that uploaded the original index.html file, and so doesn't have the right permissions to overwrite it.
For new files it causes no problems because you're the owner, but for the existing one, if you aren't in the same group and don't have write permission, you will get this error.
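If you have shell access to the server (or your host's file manager), checking the ownership and permissions of the existing file will usually confirm this; something along these lines (the path and group name are placeholders):
ls -l /path/to/site/index.html
chgrp <your-ftp-group> /path/to/site/index.html
chmod 664 /path/to/site/index.html
Alternatively, right-clicking the remote file in FileZilla and choosing File permissions shows the current mode, although changing it will only work if your FTP user owns the file.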
