Trouble accessing Azure containers from Azure Databricks - azure-blob-storage

I am having trouble accessing an Azure container from Azure Databricks.
I followed the instructions from this tutorial, so I started by creating my container and generating a SAS.
Then, in a Databricks notebook, I ran the following command:
dbutils.fs.mount( source = endpoint_source, mount_point = mountPoint_folder, extra_configs = {config : sas})
where I replaced endpoint_source, mountPoint_folder, and sas with the following:
container_name = "containertobesharedwithdatabricks"
storage_account_name = "atabricksstorageaccount"
storage_account_url = storage_account_name + ".blob.core.windows.net"
sas = "?sv=2021-06-08&ss=bfqt&srt=o&sp=rwdlacupiytfx&se=..."
endpoint_source = "wasbs://"+ storage_account_url + "/" + container_name
mountPoint_folder = "/mnt/projet8"
config = "fs.azure.sas."+ container_name + "."+ storage_account_url
but I ended up with the following exception:
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container $root in account atabricksstorageaccount.blob.core.windows.net not found, and we can't create it using anoynomous credentials, and no credentials found for them in the configuration.
I cannot figure out why databricks cannot find the root container.
Any help would be much appreciated. Thanks in advance.
The storage account and folder exist, as can be seen in this capture, so I am puzzled.

Using the same approach as yours, I got the same error.
Using the following code, I was able to mount successfully. Change the endpoint_source value to the format wasbs://<container-name>@<storage-account-name>.blob.core.windows.net.
endpoint_source = 'wasbs://data@blb2301.blob.core.windows.net'
mp = '/mnt/repro'
config = "fs.azure.sas.data.blb2301.blob.core.windows.net"
sas = "<sas>"
dbutils.fs.mount( source = endpoint_source, mount_point = mp, extra_configs = {config : sas})
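Once the mount succeeds, a quick sanity check (using the standard dbutils.fs.ls utility) is to list the mount point:
display(dbutils.fs.ls(mp))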

My bad... I put a "/" instead of "@" between the container_name and the storage_account_url, and had them in the wrong order. The right syntax is:
endpoint_source = "wasbs://" + container_name + "@" + storage_account_url

Related

Creating cross region autonomous database failing with 'message': "The following tag namespaces / keys are not authorized or not found: 'oracle-tags'"

I need help creating a cross-region standby database via Python. I have tried creating it with
oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails
but I am unable to find an example for this, so I tried with whatever I could find in the SDK documentation:
import oci

response = oci_client.get_autonomous_database(autonomous_database_id=primary_db_id)
primary_db_details = response.data

def create_cross_region_standby_db(db_client, primary_db_details: oci.database.models.AutonomousDatabase):
    adw_request = oci.database.models.CreateCrossRegionAutonomousDatabaseDataGuardDetails()
    adw_request.compartment_id = primary_db_details.compartment_id
    adw_request.db_name = primary_db_details.db_name
    adw_request.data_storage_size_in_tbs = primary_db_details.data_storage_size_in_tbs
    adw_request.data_storage_size_in_gbs = primary_db_details.data_storage_size_in_gbs
    adw_request.cpu_core_count = primary_db_details.cpu_core_count
    adw_request.db_version = primary_db_details.db_version
    adw_request.db_workload = primary_db_details.db_workload
    adw_request.license_model = primary_db_details.license_model
    adw_request.is_mtls_connection_required = primary_db_details.is_mtls_connection_required
    adw_request.is_auto_scaling_enabled = primary_db_details.is_auto_scaling_enabled
    adw_request.source_id = primary_db_details.id
    adw_request.subnet_id = "<standby subnet id>"
    adw_response = db_client.create_autonomous_database(create_autonomous_database_details=adw_request)
    print(adw_response.data)
    adw_id = adw_response.data.id
    oci.wait_until(db_client, db_client.get_autonomous_database(adw_id), 'lifecycle_state', 'AVAILABLE')
    print("Created ADW {}".format(adw_id))
    return adw_id
create_cross_region_standby_db is called with standby-region credentials. Creating the primary DB in its own region works fine.
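For context, a minimal sketch of what calling it with standby-region credentials could look like (the profile name and region value are assumptions, not from the question):
import oci

# Hypothetical: a profile in ~/.oci/config whose credentials are valid for the standby region.
standby_config = oci.config.from_file(profile_name="STANDBY")
standby_config["region"] = "<standby-region>"

db_client = oci.database.DatabaseClient(standby_config)
standby_id = create_cross_region_standby_db(db_client, primary_db_details)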

Changing container-managed authentication alias

I'm using WebSphere 7.0.0.37 and Jython.
I need to change the 'Container-managed authentication alias'; unfortunately I can't find anything in the API, in the attributes of existing DataSources, or in any example for that task.
I have successfully changed the 'component-managed authentication alias' with:
AdminConfig.modify(DataSourceProvider, '[[name "basename"] [authDataAlias "' + nameNode + '/' + aliasJaas + '"]]')
How can I do that?
Thank you!
Here is some logic which you could use to solve your problem.
# Create new alias
cellName = AdminConfig.showAttribute(AdminConfig.list("Cell"), "name")
security = AdminConfig.getid('/Cell:' + cellName + '/Security:/')
myAlias = 'blahAlias'
user = 'blah'
pswd = 'blah'
jaasAttrs = [['alias', myAlias], ['userId', user], ['password', pswd ]]
print AdminConfig.create('JAASAuthData', security, jaasAttrs)
print "Alias = " + myAlias + " was created."
# Get a reference to your DataSource (assume you know how to do this):
myDS = ...
# Create the mapping (container-managed alias) on the DataSource
AdminConfig.create('MappingModule', myDS, '[[authDataAlias ' + myAlias + '] [mappingConfigAlias DefaultPrincipalMapping]]')
Note that if you can figure out how to do a given task in the Admin Console, you can use the "Command Assist" function to get a Jython snippet to do the equivalent via wsadmin. See here.
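In case it helps with the "myDS = ..." step above, one way to look up the DataSource by name (the name used here is a placeholder, not from the question):
# Sketch: find a DataSource config object by its configured name
dsName = 'MyDataSource'
myDS = None
for ds in AdminConfig.list('DataSource').splitlines():
    if AdminConfig.showAttribute(ds, 'name') == dsName:
        myDS = ds
        break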

Bucketeer - Heroku's S3 bucket add-on configuration on Django

I am currently using S3 to serve static files on Heroku. I created and manage the S3 bucket myself, and my settings.py file is the following.
import os
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = '<MY BUCKET NAME>'
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'http://' + AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
This is the same as this answer, and it works perfectly fine: Django + Heroku + S3
However, I wanted to switch to Bucketeer, which is a Heroku add-on that creates and manages an S3 bucket for you. But Bucketeer provides different parameters, the static URL looks different, and I can't make it work. The URL has the following pattern: "bucketeer-heroku-shared.s3.amazonaws.com/UNIQUE_BUCKETEER_BUCKET_PREFIX/public/". So my updated code is the following.
#Bucketeer
AWS_ACCESS_KEY_ID = os.environ.get('BUCKETEER_AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('BUCKETEER_AWS_SECRET_ACCESS_KEY')
BUCKETEER_BUCKET_PREFIX = os.environ.get('BUCKETEER_BUCKET_PREFIX')
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
#Bucketeer Config
STATIC_URL = ('http://bucketeer-heroku-shared.s3.amazonaws.com/' +
              BUCKETEER_BUCKET_PREFIX + '/public/')
# I also tried
# STATIC_URL = ('http://bucketeer-heroku-shared.s3.amazonaws.com/' +
#               BUCKETEER_BUCKET_PREFIX + '/')
And this is the error I got.
Preparing static assets
Collectstatic configuration error. To debug, run:
$ heroku run python manage.py collectstatic --noinput
Needless to say, no static files were present on the app, so when I ran the suggested command I got:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
which means I'm not authorized to access said bucket. Could somebody shed some light on what is going on here and how to fix it?
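For what it's worth, the Bucketeer credentials are presumably scoped to your prefix inside the shared bucket, so a 403 from collectstatic may mean the backend is writing to the bucket root instead of under BUCKETEER_BUCKET_PREFIX/public. A rough sketch of scoping django-storages to that prefix (the AWS_LOCATION setting is an assumption about the django-storages version, not something from the question):
AWS_STORAGE_BUCKET_NAME = 'bucketeer-heroku-shared'
# Assumption: this django-storages version honours AWS_LOCATION as a key prefix.
AWS_LOCATION = BUCKETEER_BUCKET_PREFIX + '/public'
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'http://bucketeer-heroku-shared.s3.amazonaws.com/' + AWS_LOCATION + '/'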

How to create an EC2 instance through boto Python code

I used the following code, but it is not working:
requests = [conn.request_spot_instances(price=0.0034, image_id='ami-6989a659', count=1, type='one-time', instance_type='m1.micro')]
Use the following code to create an instance from the Python command line.
import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-west-2",
    aws_access_key_id="<aws access key>",
    aws_secret_access_key="<aws secret key>",
)
conn.run_instances(
    "<ami-image-id>",
    key_name="myKey",
    instance_type="t2.micro",
    security_groups=["your-security-group-here"],
)
To create an EC2 instance using Python on AWS, you need an "aws_access_key_id_value" and an "aws_secret_access_key_value".
You can store these variables in config.properties and write your code in a create-ec2-instance.py file.
Create config.properties and save the following in it.
aws_access_key_id_value='YOUR-ACCESS-KEY-OF-THE-AWS-ACCOUNT'
aws_secret_access_key_value='YOUR-SECRET-KEY-OF-THE-AWS-ACCOUNT'
region_name_value='region'
ImageId_value = 'ami-id'
MinCount_value = 1
MaxCount_value = 1
InstanceType_value = 't2.micro'
KeyName_value = 'name-of-ssh-key'
Create create-ec2-instance.py and save the following code in it.
import boto3

def getVarFromFile(filename):
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

getVarFromFile('config.properties')

ec2 = boto3.resource(
    'ec2',
    aws_access_key_id=data.aws_access_key_id_value,
    aws_secret_access_key=data.aws_secret_access_key_value,
    region_name=data.region_name_value
)

instance = ec2.create_instances(
    ImageId=data.ImageId_value,
    MinCount=data.MinCount_value,
    MaxCount=data.MaxCount_value,
    InstanceType=data.InstanceType_value,
    KeyName=data.KeyName_value)

print(instance[0].id)
Use the following command to execute the Python code:
python create-ec2-instance.py
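Since config.properties in this setup is just a series of Python assignments, here is an alternative sketch for loading it without the deprecated imp module (an assumption on my part, not part of the original answer):
# Sketch: load the same config.properties by exec-ing it into a dict.
config = {}
with open('config.properties') as f:
    exec(f.read(), config)

print(config['InstanceType_value'])  # e.g. 't2.micro'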

Why can't I get a Stream using the FtpWebRequest.GetRequestStream() method?

I have been trying to write a simple FTP client using C# in .NET 2.0 for 3 days now and am
missing something. I create an FtpWebRequest object and set all its properties.
string uri = host + remoteFile;
System.Net.FtpWebRequest ftp = (FtpWebRequest)(FtpWebRequest.Create(uri));
ftp.Credentials = new System.Net.NetworkCredential(username, password);
ftp.KeepAlive = false;
ftp.UseBinary = true;
ftp.Method = System.Net.WebRequestMethods.Ftp.UploadFile;
But when I go to get the stream, it fails...
System.IO.Stream strm = ftp.GetRequestStream();
Here is the error: "System.Net.WebException: The remote server returned an error: (501) Syntax error in parameters or arguments."
This method SHOULD return the stream I need to write to, and many examples do exactly this. I'm not sure what I'm missing. My host looks like this: "ftp://myhostname/" and I've triple-checked my credentials.
Please help!
Maybe ftp.UseBinary = true; is not supported by the server?
You are missing the "/" after the host:
string uri = host + "/" + remoteFile;
and the remote file string should look like this: file.txt without any path.
