I am currently using S3 to serve static files on Heroku. The S3 bucket was created and is managed by me, and my settings.py looks like this:
import os
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = '<MY BUCKET NAME>'
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'http://' + AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
This is the same setup as this answer, and it works perfectly fine: Django + Heroku + S3
However, I want to switch to Bucketeer, a Heroku add-on that creates and manages an S3 bucket for you. Bucketeer provides different parameters, the static URL looks different, and I can't make it work. The URL has the following pattern: "bucketeer-heroku-shared.s3.amazonaws.com/UNIQUE_BUCKETEER_BUCKET_PREFIX/public/". So my updated code is the following.
#Bucketeer
AWS_ACCESS_KEY_ID = os.environ.get('BUCKETEER_AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('BUCKETEER_AWS_SECRET_ACCESS_KEY')
BUCKETEER_BUCKET_PREFIX = os.environ.get('BUCKETEER_BUCKET_PREFIX')
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
#Bucketeer Config
STATIC_URL = 'http://bucketeer-heroku-shared.s3.amazonaws.com/' + BUCKETEER_BUCKET_PREFIX + '/public/'
# I also tried
# STATIC_URL = 'http://bucketeer-heroku-shared.s3.amazonaws.com/' + BUCKETEER_BUCKET_PREFIX + '/'
And this is the error I got.
Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python manage.py collectstatic --noinput
Needless to say, no static files were present on the app, so when I ran the suggested command I got:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
This means I'm not authorized to access said bucket. Could somebody shed some light on what is going on here and how to fix it?
Related
I am having trouble accessing an Azure container from Azure Databricks.
I followed the instructions from this tutorial, so I started by creating my container and generating a SAS.
Then, in a Databricks notebook, I ran the following command:
dbutils.fs.mount( source = endpoint_source, mount_point = mountPoint_folder, extra_configs = {config : sas})
where I replaced endpoint_source, mountPoint_folder, and sas with the following:
container_name = "containertobesharedwithdatabricks"
storage_account_name = "atabricksstorageaccount"
storage_account_url = storage_account_name + ".blob.core.windows.net"
sas = "?sv=2021-06-08&ss=bfqt&srt=o&sp=rwdlacupiytfx&se=..."
endpoint_source = "wasbs://"+ storage_account_url + "/" + container_name
mountPoint_folder = "/mnt/projet8"
config = "fs.azure.sas."+ container_name + "."+ storage_account_url
but I ended up with the following exception:
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container $root in account atabricksstorageaccount.blob.core.windows.net not found, and we can't create it using anoynomous credentials, and no credentials found for them in the configuration.
I cannot figure out why Databricks cannot find the root container.
Any help would be much appreciated. Thanks in advance.
The storage account and folder exist, as can be seen in this capture, so I am puzzled.
Using the same approach as yours, I got the same error.
Using the following code, I was able to mount successfully. Change the endpoint_source value to the format wasbs://<container-name>@<storage-account-name>.blob.core.windows.net.
endpoint_source = 'wasbs://data@blb2301.blob.core.windows.net'
mp = '/mnt/repro'
config = "fs.azure.sas.data.blb2301.blob.core.windows.net"
sas = "<sas>"
dbutils.fs.mount( source = endpoint_source, mount_point = mp, extra_configs = {config : sas})
My bad... I put a "/" instead of "@" between container_name and storage_account_url and inverted the order, so the right syntax is:
endpoint_source = "wasbs://" + container_name + "@" + storage_account_url
I'm trying to push Slate docs to two different S3 buckets based on the environment, but it's complaining that s3_sync is not a parameter for Middleman.
I have set the S3 bucket for the environment in config.rb, but I'm still getting the error below when I run bundle exec middleman s3_sync --verbose --environment=internal
config.rb:
configure :internal do
  s3_sync.bucket = ENV['INTERNAL_DOCS_AWS_BUCKET'] # The name of the internal docs S3 bucket you are targeting. This is globally unique.
end
activate :s3_sync do |s3_sync|
  s3_sync.bucket = ENV['DOCS_AWS_BUCKET'] # The name of the S3 bucket you are targeting. This is globally unique.
  s3_sync.region = ENV['DOCS_AWS_REGION'] # The AWS region for your bucket.
  s3_sync.aws_access_key_id = ENV['DOCS_AWS_ACCESS_KEY_ID']
  s3_sync.aws_secret_access_key = ENV['DOCS_AWS_SECRET_ACCESS_KEY']
  s3_sync.prefer_gzip = true
  s3_sync.path_style = true
  s3_sync.reduced_redundancy_storage = false
  s3_sync.acl = 'public-read'
  s3_sync.encryption = false
  s3_sync.prefix = ''
  s3_sync.version_bucket = false
  s3_sync.index_document = 'index.html'
  s3_sync.error_document = '404.html'
end
Error:
bundler: failed to load command: middleman (/usr/local/bundle/bin/middleman)
NameError: undefined local variable or method `s3_sync' for #<Middleman::ConfigContext:0x0000561eca099a40>
s3_sync is only defined within the block of activate :s3_sync.
It is undefined within the configure :internal block.
A solution might look like the following, using environment? or environment
activate :s3_sync do |s3_sync|
  s3_sync.bucket = if environment?(:internal)
    ENV['INTERNAL_DOCS_AWS_BUCKET']
  else
    ENV['DOCS_AWS_BUCKET']
  end
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ...
end
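With this in place, the command from the question, bundle exec middleman s3_sync --verbose --environment=internal, should select INTERNAL_DOCS_AWS_BUCKET, while running s3_sync without the flag falls back to DOCS_AWS_BUCKET.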
I'm running into an issue uploading to S3 with version 2 of the SDK.
When running:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'],
    s3_server_side_encryption: :aes256)
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location)
I get the following error:
Aws::S3::Errors::InvalidToken
-----------------------------
The provided token is malformed or otherwise invalid.
When I remove the s3_server_side_encryption setting it changes to an access denied error.
I've been trying to find documentation on doing this with v2 of the API, but everything online seems to rely on the bucket object having a write method, which doesn't seem to exist in v2.
http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingRubySDK.html
I'm likely just not finding the correct document for the v2 API. I'd like to avoid mixing v1 and v2 of the API, but may fall back to that.
upload_file takes arguments similar to write:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(credentials['key'], credentials['secret'])
})
s3 = Aws::S3::Resource.new
bucket = 'VandalayIndustriesAccountingData'
s3_file_path = "folder/filename.tar.gz"
s3_object = s3.bucket(bucket).object(s3_file_path)
s3_object.upload_file(artifact_location, server_side_encryption: :AES256)
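To sanity-check that the option took effect, you can read the object's encryption back through the same resource interface. This is a minimal sketch reusing the s3, bucket, and s3_file_path variables from the snippet above; server_side_encryption is populated from a HEAD request on the object:
# Re-fetch the object and inspect its server-side encryption attribute.
# "AES256" is expected when SSE-S3 was applied to the upload.
uploaded = s3.bucket(bucket).object(s3_file_path)
puts uploaded.server_side_encryption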
Can anyone point me to good documentation for uploading files to S3 using aws-sdk version 2? I checked out the main doc, and in v1 we used to do it like this:
s3 = AWS::S3.new
obj = s3.buckets['my-bucket']
Now in v2, when I try:
s3 = Aws::S3::Client.new
I end up with:
Aws::Errors::MissingRegionError: missing region; use :region option or export region name to ENV['AWS_REGION']
Can anyone help me with this?
As per official documentation:
To use the Ruby SDK, you must configure a region and credentials.
Therefore,
s3 = Aws::S3::Client.new(region:'us-west-2')
Alternatively, a default region can be loaded from one of the following locations:
Aws.config[:region]
ENV['AWS_REGION']
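For example, setting the process-wide default once lets you construct clients without passing :region each time. A small sketch using the same region as above:
# Set a global default region; clients created afterwards inherit it from Aws.config.
Aws.config.update(region: 'us-west-2')
s3 = Aws::S3::Client.new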
Here's a complete S3 demo on the aws v2 gem that worked for me:
Aws.config.update(
  region: 'us-east-1',
  credentials: Aws::Credentials.new(
    Figaro.env.s3_access_key_id,
    Figaro.env.s3_secret_access_key
  )
)
s3 = Aws::S3::Client.new
resp = s3.list_buckets
puts resp.buckets.map(&:name)
Gist
Official list of AWS region IDs here.
If you're unsure of the region, the best guess would be US Standard, which has the ID us-east-1 for config purposes, as shown above.
If you were using an aws.yml file for your credentials in Rails, you might want to create a file config/initializers/aws.rb with the following content:
filename = File.expand_path(File.join(Rails.root, "config", "aws.yml"))
config = YAML.load_file(filename)
aws_config = config[Rails.env.to_s].symbolize_keys
Aws.config.update({
  region: aws_config[:region],
  credentials: Aws::Credentials.new(aws_config[:access_key_id], aws_config[:secret_access_key])
})
The config/aws.yml file would need to be adapted to include the region.
development: &development
  region: 'your region'
  access_key_id: 'your access key'
  secret_access_key: 'your secret access key'

production:
  <<: *development
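Once that initializer has run, clients created anywhere in the app should pick up both the region and the credentials from Aws.config without extra arguments. A minimal sketch, with list_buckets used only as an example call:
# e.g. from a Rails console or any code that runs after boot
s3 = Aws::S3::Client.new
puts s3.list_buckets.buckets.map(&:name)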
I am using aws-sdk-ruby to delete a file saved in a bucket in my Amazon S3 account, but I can't figure out why I am not able to delete the desired file from the S3 bucket using the following code.
This is my code:
require 'aws-sdk-v1'
require 'aws-sdk'
ENV['AWS_ACCESS_KEY_ID'] = "XXXXXXX"
ENV["AWS_SECRET_ACCESS_KEY"] = '/ZZZZZZZZ'
ENV['AWS_REGION'] = 'us-east-1'
s3 = Aws::S3::Resource.new
bucket = s3.bucket('some-bucket')
obj = bucket.object('https://s3.amazonaws.com/some-bucket/38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
obj.delete
The documentation says it should look like this:
s3 = Aws::S3.new
bucket = s3.buckets['some-bucket']
object = bucket.objects['38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg']
object.delete
Please note:
the square brackets,
that the object's key doesn't include the domain, and
that instead of creating an instance of Aws::S3::Resource, you create an instance of AWS::S3.
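If you would rather stay with the v2 resource interface the question already uses, the same delete can be written by passing only the key rather than the full URL. This is a sketch reusing the bucket name and key from the question, not a verified fix:
# v2 resource-style delete: the key is the path inside the bucket, without the domain.
s3 = Aws::S3::Resource.new
s3.bucket('some-bucket').object('38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg').delete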
If you use API version 3 (aws-sdk-s3 1.81.1), you should do something like this:
s3 = Aws::S3::Client.new
s3.delete_object(bucket: 'bucket_name', key: 'bucket_folder/file.txt')
it should be:
objs = bucket.objects('https://s3.amazonaws.com/some-bucket/38ac8226-fa72-4aee-8c3d-a34a1db77b91/some_image.jpg')
objs.each {|obj| obj.delete}
With the aws-sdk v2, I had to do this (see Doc):
$s3.bucket("my-bucket").objects(prefix: 'my_folder/').batch_delete!
(delete is deprecated in favor of batch_delete)
Useful post: https://ruby.awsblog.com/post/Tx1H87IVGVUMIB5/Using-Resources