Is there an Azure CLI option to upload files to Blob storage in parallel? There is a folder with lots of files. Currently the only option I have is to run a for loop with the command below, which uploads them sequentially.
az storage blob upload --file $f --container-name $CONTAINERNAME --name $FILEINFO
For now, it is not possible. Azure CLI 2.0 has no option or argument to upload the contents of a specified directory to Blob storage recursively, so Azure CLI 2.0 does not support uploading files in parallel.
If you want to upload multiple files in parallel, you could use AzCopy.
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S
Specifying option /S uploads the contents of the specified directory to Blob storage recursively, meaning that all subfolders and their files will be uploaded as well.
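AzCopy also performs its transfers concurrently on its own. With the classic Windows AzCopy shown above, the number of concurrent operations can reportedly be tuned with the /NC option; treat the exact flag as an assumption and check AzCopy /? for your version:
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S /NC:8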
As you mentioned, you could use a loop to upload the files, but it does not upload them in parallel. Try the following script.
export AZURE_STORAGE_ACCOUNT='PUT_YOUR_STORAGE_ACCOUNT_HERE'
export AZURE_STORAGE_ACCESS_KEY='PUT_YOUR_ACCESS_KEY_HERE'
export container_name='nyc-tlc-sf'
export source_folder='/Volumes/MacintoshDisk02/Data/Misc/NYC_TLC/yellow/2012/*'
export destination_folder='yellow/2012/'
#echo "Creating container..."
#azure storage container create $container_name
for f in $source_folder
do
echo "Uploading $f file..."
azure storage blob upload $f $container_name $destination_folder$(basename $f)
done
echo "List all blobs in container..."
azure storage blob list $container_name
echo "Completed"
I am trying to upload a file from my local machine to an S3 bucket, but I am getting the error "The user-provided path ~Downloads/index.png does not exist."
aws s3 cp ~Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
A file named index.png does exist in my Downloads folder.
This answer might be helpful to some users new to AWS CLI on different platforms.
If you are on Linux or Linux-like systems, you can type:
aws s3 cp ~/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
Note that ~Downloads means the home directory of a user called Downloads. What you want is ~/Downloads, which means the Downloads directory under the current user's home directory.
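You can see the difference directly in the shell (a quick bash illustration; if no user named Downloads exists, the second form is left unexpanded):
echo ~/Downloads # expands to /home/<current user>/Downloads
echo ~Downloads # stays literal unless a user account named "Downloads" exists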
You can type out your path fully like so (assuming your home directory was /home/matt):
aws s3 cp /home/matt/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are on Windows, you can type:
aws s3 cp C:\Users\matt\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
or you can use the Windows equivalent of ~:
aws s3 cp %USERPROFILE%\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are using Windows and CLI version 2:
aws s3 cp "helloworld.txt" s3://testbucket
Currently, in a Linux Docker container, I have a bash script that downloads a large number of GRIB2 weather forecast files from a specific URL, using a cookie-based login.
Once those files are downloaded, I use an executable from the ECCODES library, installed in the same Docker container, to filter out the unneeded data and reduce the file size.
My company has access to the Azure platform, and I would like to download and filter those GRIB2 files directly on Azure, so that I don't have to run the script manually and keep downloading files locally only to upload them to Azure storage afterwards.
However, I have never worked with Azure before, so what I would like to know is:
would it be possible to run this script on, say, an Azure VM that downloads the files and stores the filtered GRIB2 files directly in Azure storage (Blob storage seems to be the best option based on what I've read so far)?
Thanks!
#!/usr/bin/env bash
export AZURE_STORAGE_ACCOUNT=your_azure_storage_account
export AZURE_STORAGE_ACCESS_KEY=your_azure_storage_access_key
# Retrieving current date to upload only new files
date=`date +%Y-%m-%dT%H:%MZ`
az login -u xxx@yyy.com -p password --output none
containerName=your_container_name
containerExists=`az storage container exists --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --name $containerName --output tsv`
if [[ $containerExists == "False" ]]; then
az storage container create --name $containerName # Create a container
fi
# Upload GRIB2 files to container
fileExists=`az storage blob exists --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container-name $containerName --name "gfs.0p25.2019061300.f006.grib2" --output tsv`
if [[ $fileExists == "False" ]]; then
az storage blob upload --container-name $containerName --file ../Resources/Weather/Historical_Data/gfs.0p25.2019061300.f006.grib2 --name gfs.0p25.2019061300.f006.grib2
fi
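To push every filtered GRIB2 file instead of a single hard-coded one, the same exists/upload pair can be wrapped in a loop. A minimal sketch, assuming the filtered files sit in the same Historical_Data folder used above:
for f in ../Resources/Weather/Historical_Data/*.grib2
do
blobName=$(basename "$f")
fileExists=$(az storage blob exists --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container-name $containerName --name "$blobName" --output tsv)
if [[ $fileExists == "False" ]]; then
az storage blob upload --container-name $containerName --file "$f" --name "$blobName"
fi
done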
I am trying to fetch the file name in a blob storage container so I can use it further in my script. I tried using az storage blob list to list the blobs present there, but was unsuccessful.
Here's the command that I used:
az storage blob list --connection-string connstr --container-name "vinny/input/"
It threw the error: The requested URI does not represent any resource on the server. ErrorCode: InvalidUri
It seems the command only accepts the container name, not a folder inside it. But when I tried:
az storage blob list --connection-string connstr --container-name "vinny"
It doesn't list the file but keeps on executing.
I need to get the filename that's inside vinny/input/
Anyone got any solution for it?
I just added a --prefix option to it and was able to list the file the way I wanted. Here it goes:
az storage blob list --connection-string connstr --container-name "vinny" --prefix "Input/" --output table
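If you only need the blob names for the rest of your script, the global --query option can strip the output down further (a sketch reusing the same connection string, container, and prefix):
az storage blob list --connection-string connstr --container-name "vinny" --prefix "Input/" --query "[].name" --output tsv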
az storage blob list -c container_name --account-name storage_account_name --output table --num-results "*"
then you can parse the output with awk, cut, etc.
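For example, assuming the blob name lands in the first column of the table output (skip the two header lines and print that column):
az storage blob list -c vinny --account-name storage_account_name --prefix "Input/" --output table --num-results "*" | awk 'NR>2 {print $1}'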
good luck
I am trying to send an AD backup folder to an AWS S3 bucket on a Windows 2016 server machine, via the command line.
aws s3 cp "D:\WindowsImageBackup" s3://ad-backup/
However I get the below error.
Invalid length for parameter Key, value: 0, valid range: 1-inf
The folder I am trying to upload has some large files in it, so I am not sure if it is too big. I have tested the bucket, and smaller files work.
Thanks
You have to use --recursive option to upload a folder:
aws s3 cp --recursive "D:\WindowsImageBackup" s3://ad-backup/
Or pack that folder into a single file and upload that file with plain aws s3 cp.
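If this backup upload will be repeated, aws s3 sync is also worth a look: it copies the directory tree recursively by default and only uploads files that are new or changed (the destination prefix here is just an example):
aws s3 sync "D:\WindowsImageBackup" s3://ad-backup/WindowsImageBackup/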
I am using the AWS CLI to manage files/objects in S3. I have thousands of objects buried in a complex system of nested folders (subfolders), and I want to elevate all of the objects to the "root" of the S3 bucket, into one folder at the root of the bucket (s3://bucket/folder/file.txt).
I've tried using this command:
aws s3 mv s3://bucket-a/folder-a s3://bucket-a --recursive --exclude "*" --include "*.txt"
When I use the mv command, it carries over the prefixes (directory paths) of each object resulting in the same nested folder system. Here is what I want to accomplish:
Desired Result:
Where:
s3://bucket-a/folder-a/file-1.txt
s3://bucket-a/folder-b/folder-b1/file-2.txt
s3://bucket-a/folder-c/folder-c1/folder-c2/file-3.txt
Output:
s3://bucket-a/file-1.txt
s3://bucket-a/file-2.txt
s3://bucket-a/file-3.txt
I have been told that I need to use a bash script to accomplish my desired result. Here is the sample script that was provided to me:
#!/bin/bash
#BASH Script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${my-bucket}" --query "Contents[].{Object:Key}" --output text) ;
do
echo "$key"
FILENAME=$($key | awk '{print $NF}' FS=/)
aws s3 cp s3://$my-bucket/$key s3://$my-bucket/my-folder/$FILENAME
done
When I run this bash script, I get an error:
A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied
I tested the connection with another aws s3 command and confirmed that it works. I added policies to the user to include all privileges for S3. I have no idea what I am doing wrong here.
Any help would be greatly appreciated.
That script looks messed up: it sets a variable called bucketname but then tries to use a different one called my-bucket. In bash, ${my-bucket} expands to the value of $my or, because $my is unset, to the literal string bucket, so the list call was hitting a bucket literally named bucket, which explains the AccessDenied error. What happens if you try this?
#!/bin/bash
#BASH Script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${bucketname}" --query "Contents[].{Object:Key}" --output text) ;
do
echo "$key"
FILENAME=$(echo "$key" | awk -F/ '{print $NF}') # keep only the part after the last /
aws s3 cp "s3://$bucketname/$key" "s3://$bucketname/my-folder/$FILENAME"
done
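Since the question actually wants the objects moved to the root of the bucket rather than copied into my-folder, a minimal variant using aws s3 mv and basename might look like this (a sketch that assumes the keys contain no whitespace and that every object currently sits under some subfolder):
#!/bin/bash
# Move every object to the root of the bucket, dropping its directory prefix
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${bucketname}" --query "Contents[].{Object:Key}" --output text) ;
do
aws s3 mv "s3://$bucketname/$key" "s3://$bucketname/$(basename "$key")"
done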