I use this code to upload files to S3:
s3 = Aws::S3::Resource.new(...)
obj = s3.bucket(s3_bucket).object("final/path")
success = obj.upload_file('my/local/file')
How can I change the storage class to Standard-Infrequent Access?
Try passing the storage_class option to #upload_file:
success = obj.upload_file('my/local/file', storage_class: "STANDARD_IA")
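For context, here's a minimal end-to-end sketch (the bucket, key, and file path are placeholders, not from the question); upload_file forwards the option to the underlying PutObject call, and you can read the stored class back afterwards:

require 'aws-sdk-s3' # or require 'aws-sdk' on older SDK versions

s3  = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-bucket').object('final/path') # placeholder names

# storage_class is passed through to PutObject
obj.upload_file('my/local/file', storage_class: 'STANDARD_IA')

# The object's HEAD data reports the storage class it was stored with
puts obj.storage_class # => "STANDARD_IA"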
I successfully built a Logic App where, whenever a blob is added to container-one, it gets copied to container-2. However, it fails whenever a blob larger than 50 MB (the default size limit) is uploaded. Blobs are added via the REST API. Could you please advise?
Currently, the maximum file size with chunking disabled is 50 MB. One workaround is to use Azure Functions to transfer the files from one container to another.
Below is sample Python code that worked for me for transferring a file from one container to another:
from azure.storage.blob import BlobClient, BlobServiceClient
from azure.storage.blob import ResourceTypes, AccountSasPermissions
from azure.storage.blob import generate_account_sas
from datetime import datetime, timedelta

connection_string = '<Your Connection String>'
account_key = '<Your Account Key>'
source_container_name = 'container1'
blob_name = 'samplepdf.pdf'
destination_container_name = 'container2'

# Create client
client = BlobServiceClient.from_connection_string(connection_string)

# Create SAS token for blob
sas_token = generate_account_sas(
    account_name=client.account_name,
    account_key=account_key,
    resource_types=ResourceTypes(object=True),
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=4)
)

# Create blob client for source blob
source_blob = BlobClient(
    client.url,
    container_name=source_container_name,
    blob_name=blob_name,
    credential=sas_token
)

# Create new blob and start copy operation
new_blob = client.get_blob_client(destination_container_name, blob_name)
new_blob.start_copy_from_url(source_blob.url)
REFERENCES:
- General Limits
- How to copy a blob from one container to another container using Azure Blob storage SDK
I have the following code:
bucket = get_bucket('bucket-name')
blob = bucket.blob(os.path.join(*pieces))
blob.upload_from_string('test')
blob.make_public()
result = blob.public_url
# result is <Mock name='mock().get_bucket().blob().public_url' ...>
I would like to mock the result of public_url; my unit test code looks something like this:
with ExitStack() as st:
    from google.cloud import storage
    blob_mock = mock.Mock(spec=storage.Blob)
    blob_mock.public_url.return_value = 'http://'
    bucket_mock = mock.Mock(spec=storage.Bucket)
    bucket_mock.blob.return_value = blob_mock
    storage_client_mock = mock.Mock(spec=storage.Client)
    storage_client_mock.get_bucket.return_value = bucket_mock
    st.enter_context(
        mock.patch('google.cloud.storage.Client', storage_client_mock))
    my_function()
Is there something like FakeRedis or moto for Google Storage, so I can mock google.cloud.storage.Blob.public_url?
I found fake-gcs-server, a fake GCS server written in Go, which can be run in a Docker container and consumed by the Python library. See its Python examples.
I'm running into an issue uploading to S3 with version 2 of the sdk.
When running:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'],
                                    s3_server_side_encryption: :aes256)
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location)
I get the following error:
Aws::S3::Errors::InvalidToken
-----------------------------
The provided token is malformed or otherwise invalid.
When I remove the s3_server_side_encryption setting it changes to an access denied error.
I've been trying to find documentation on doing this with v2 of the API, but everything online seems to rely on the bucket object having a #write method, which doesn't exist in v2 of the API.
http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingRubySDK.html
I'm likely just not finding the right document for the v2 API. I'd like to avoid mixing v1 and v2 of the API, but may fall back to that.
#upload_file takes options similar to #write:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'])
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location, server_side_encryption: 'AES256')
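If you want to confirm the encryption actually took effect, a quick sketch of a check via HeadObject (reusing the variables above):

resp = s3.client.head_object(bucket: bucket, key: s3_file_path)
resp.server_side_encryption # => "AES256"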
I'm trying to upload an image to S3 using the aws-sdk gem. I'm able to retrieve my bucket:
s3 = Aws::S3::Client.new
resp = s3.list_buckets
bucket = resp.buckets.select {|x| x.name == "mybucket"}[0]
>> bucket
>> #<struct Aws::S3::Types::Bucket name="mybucket", creation_date=2015-09-05 19:23:49 UTC>
I now have my bucket. Looking at the AWS documentation and Heroku's documentation, I should be able to call bucket.presigned_post; however, I get NoMethodError: undefined method `presigned_post' for #<Aws::S3::Types::Bucket:0x007ff583bece10>.
What am I missing here? Do I not have the correct s3 bucket object?
Aws::S3::Types::Bucket is not the same as Aws::S3::Bucket. Only the latter has #presigned_post. It appears that Aws::S3::Client#list_buckets returns information about buckets, not the bucket objects (which you have to create yourself).
Have you tried:
bucket = Aws::S3::Bucket.new('mybucket', client: s3)
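From there, a minimal sketch of the rest (the key and size limit below are placeholder values, not from the question):

s3 = Aws::S3::Client.new
bucket = Aws::S3::Bucket.new('mybucket', client: s3)

# Presigned POST data for a browser-based upload
post = bucket.presigned_post(
  key: 'uploads/my-image.jpg',         # placeholder key
  content_length_range: 0..10_485_760  # allow up to 10 MB
)

post.url    # the form's action URL
post.fields # hidden fields to include in the multipart POST form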
I am using aws-sdk-ruby to delete a file saved in a bucket of my Amazon S3 account, but I can't figure out why I am unable to delete the desired file from the S3 bucket using the following code.
This is my code:
require 'aws-sdk-v1'
require 'aws-sdk'
ENV['AWS_ACCESS_KEY_ID'] = "XXXXXXX"
ENV["AWS_SECRET_ACCESS_KEY"] = '/ZZZZZZZZ'
ENV['AWS_REGION'] = 'us-east-1'
s3 = Aws::S3::Resource.new
bucket = s3.bucket('some-bucket')
obj = bucket.object('https://s3.amazonaws.com/some-bucket/38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
obj.delete
The documentation says it should look like this:
s3 = AWS::S3.new
bucket = s3.buckets['some-bucket']
object = bucket.objects['38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg']
object.delete
Please note:
- the square brackets,
- that the object's key doesn't include the domain, and
- that instead of creating an instance of Aws::S3::Resource, you create an instance of AWS::S3 (the v1 namespace).
If you use API version 3 (e.g. aws-sdk-s3 1.81.1), you should do something like this:
s3 = Aws::S3::Client.new
s3.delete_object(bucket: 'bucket_name', key: 'bucket_folder/file.txt')
It should be:
objs = bucket.objects(prefix: '38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
objs.each { |obj| obj.delete }
(#objects takes a prefix option, not a URL; the prefix is the object's path within the bucket.)
With the aws-sdk v2, I had to do this (see the docs):
$s3.bucket("my-bucket").objects(prefix: 'my_folder/').batch_delete!
(#delete is deprecated in favor of #batch_delete!)
Useful post: https://ruby.awsblog.com/post/Tx1H87IVGVUMIB5/Using-Resources