How to generate a SAS token using the Python legacy SDK (2.1) without account_key or connection_string - azure-blob-storage

I am using Python 3.6 and azure-storage-blob (version 1.5.0) and trying to use a user-assigned managed identity to connect to my Azure Storage blob from an Azure VM. The problem I am facing is that I want to generate a SAS token to form a downloadable URL.
I am using blob_service = BlockBlobService(account_name, token_credential) to authenticate, but I am not able to find any method that lets me generate a SAS token without supplying the account key.
I also do not see any way of using a user delegation key, as is available in the new azure-storage-blob (versions >= 12.0.0). Is there any workaround, or will I need to upgrade the Azure Storage library in the end?

I tried to reproduce this in my environment and was able to generate a SAS token without an account key or connection string by signing it with a user delegation key. Note that this uses the newer azure-storage-blob (>= 12.0.0) SDK, since the legacy SDK does not expose user delegation keys.
Code:
import datetime as dt

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

credential = DefaultAzureCredential(exclude_shared_token_cache_credential=True)

storage_acct_name = "Accountname"
container_name = "containername"
blob_name = "Filename"

url = f"https://{storage_acct_name}.blob.core.windows.net"
blob_service_client = BlobServiceClient(url, credential=credential)

# Request a user delegation key, signed with the Azure AD credential (no account key needed)
udk = blob_service_client.get_user_delegation_key(
    key_start_time=dt.datetime.utcnow() - dt.timedelta(hours=1),
    key_expiry_time=dt.datetime.utcnow() + dt.timedelta(hours=1),
)

# Generate a read-only blob SAS signed with the user delegation key
sas = generate_blob_sas(
    account_name=storage_acct_name,
    container_name=container_name,
    blob_name=blob_name,
    user_delegation_key=udk,
    permission=BlobSasPermissions(read=True),
    start=dt.datetime.utcnow() - dt.timedelta(minutes=15),
    expiry=dt.datetime.utcnow() + dt.timedelta(hours=2),
)

sas_url = (
    f"https://{storage_acct_name}.blob.core.windows.net/"
    f"{container_name}/{blob_name}?{sas}"
)
print(sas_url)
Output: the signed SAS URL is printed.
Make sure the identity you authenticate with has the Storage Blob Data Contributor role assigned on the storage account.
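As a quick check, the SAS URL can be used directly with BlobClient.from_blob_url to download the blob. A minimal sketch, assuming the sas_url produced by the code above:

from azure.storage.blob import BlobClient

# No separate credential is needed; the SAS query string in the URL authorizes the request
blob_client = BlobClient.from_blob_url(sas_url)
data = blob_client.download_blob().readall()
print(f"Downloaded {len(data)} bytes")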

Related

Azure - Copy LARGE blobs from one container to other using logic apps

I successfully built a logic app where, whenever a blob is added to container-one, it gets copied to container-2. However, it fails whenever a blob larger than 50 MB (the default size) is uploaded.
Could you please guide me?
Blobs are added via the REST API.
Below is the flow.
Currently, the maximum file size with chunking disabled is 50 MB. One workaround is to use Azure Functions to transfer the files from one container to another.
Below is the sample Python code that worked for me for transferring files from one container to another:
from datetime import datetime, timedelta

from azure.storage.blob import (
    AccountSasPermissions,
    BlobClient,
    BlobServiceClient,
    ResourceTypes,
    generate_account_sas,
)

connection_string = '<Your Connection String>'
account_key = '<Your Account Key>'
source_container_name = 'container1'
blob_name = 'samplepdf.pdf'
destination_container_name = 'container2'

# Create client
client = BlobServiceClient.from_connection_string(connection_string)

# Create SAS token for the source blob
sas_token = generate_account_sas(
    account_name=client.account_name,
    account_key=account_key,
    resource_types=ResourceTypes(object=True),
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=4),
)

# Create blob client for the source blob
source_blob = BlobClient(
    client.url,
    container_name=source_container_name,
    blob_name=blob_name,
    credential=sas_token,
)

# Create new blob and start copy operation
new_blob = client.get_blob_client(destination_container_name, blob_name)
new_blob.start_copy_from_url(source_blob.url)
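Note that start_copy_from_url only starts an asynchronous, server-side copy, and for large blobs it can take a while to finish. A minimal sketch for polling the copy status, assuming the new_blob client created above:

import time

# Poll the destination blob until the server-side copy finishes
props = new_blob.get_blob_properties()
while props.copy.status == "pending":
    time.sleep(5)
    props = new_blob.get_blob_properties()
print(props.copy.status)  # "success" once the copy has completed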
RESULT: the blob is copied from container1 to container2.
REFERENCES:
General Limits
How to copy a blob from one container to another container using Azure Blob storage SDK

pygithub - how to create a check run?

I am trying to create a check run in GitHub (reference: )
import time

import jwt
from github import Github

pem_file = "ci-results.2020-11-27.private-key.pem"
installation_id = 13221707
app_id = 90466

time_since_epoch_in_seconds = int(time.time())
payload = {
    # issued at time
    'iat': time_since_epoch_in_seconds,
    # JWT expiration time (10 minute maximum)
    'exp': time_since_epoch_in_seconds + (5 * 60),
    # GitHub App's identifier
    'iss': app_id,
}
this_jwt = jwt.encode(payload, open(pem_file, 'r').read().strip().encode(), 'RS256').decode()

gh = Github(jwt=this_jwt)
gh.get_app("ci-results")
# installation = gh.get_installation(installation_id)
# for repo in installation.get_repos():
#     print(repo)
I am getting the following error:
github.GithubException.BadCredentialsException: 401 {"message": "Bad credentials", "documentation_url": "https://docs.github.com/rest"}
I checked that the JWT created is correct by using it in the REST API directly. I think I am not using PyGithub correctly for JWT authentication; any idea what I could be doing wrong?
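For comparison, a common pattern with PyGithub is to let the GithubIntegration helper handle the app JWT, exchange it for an installation access token, and then call Repository.create_check_run with that token. A minimal sketch, reusing pem_file, app_id and installation_id from the snippet above; the repository name and commit SHA are placeholders, not values from the question:

from github import Github, GithubIntegration

# Authenticate as the GitHub App and obtain an installation access token
integration = GithubIntegration(app_id, open(pem_file).read())
installation_token = integration.get_access_token(installation_id).token

# Use the installation token like a normal access token
gh = Github(installation_token)
repo = gh.get_repo("my-org/my-repo")  # hypothetical repository

# Create a check run for a specific commit (requires the checks:write permission)
check_run = repo.create_check_run(
    name="ci-results",
    head_sha="<commit-sha>",  # hypothetical commit SHA
    status="in_progress",
)
print(check_run.id)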

Access Always Encrypted data from Databricks

I have a table in an Azure SQL managed instance with 'Always Encrypted' columns. I stored the column and master keys in Azure Key Vault.
My first question is: how do I access the decrypted data in Azure SQL from Databricks? For that, I connected to Azure SQL via JDBC. For the username and password, I am passing my credentials manually:
val jdbcHostname = "XXXXXXXXXXX.database.windows.net"
val jdbcPort = 1433
val jdbcDatabase = "ABCD"
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"
// Create a Properties() object to hold the parameters.
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", s"${jdbcUsername}")
connectionProperties.put("password", s"${jdbcPassword}")
val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
connectionProperties.setProperty("Driver", driverClass)
import java.sql.DriverManager
val connection = DriverManager.getConnection(jdbcUrl, jdbcUsername, jdbcPassword)
connection.isClosed()
val user = spark.read.jdbc(jdbcUrl, "dbo.bp_mp_user_test", connectionProperties)
display(user)
When I do this, I am able to display the data, but it is the encrypted data. How do I see the decrypted data?
I am new to the Azure and Databricks combination, so I am still learning the Azure/Microsoft stack. Are there other forms of JDBC connection syntax that allow you to decrypt?
I have the keys in Azure Key Vault. How do I make use of those keys and the security associated with them, so that when someone accesses this table from Databricks, they see the encrypted or decrypted data accordingly?

How to create EC2 instance through boto python code

requests = [conn.request_spot_instances(price=0.0034, image_id='ami-6989a659', count=1, type='one-time', instance_type='m1.micro')]
I used the above code, but it is not working.
Use the following code to create an instance from the Python command line:
import boto.ec2

# Connect with explicit credentials ...
conn = boto.ec2.connect_to_region(
    "us-west-2",
    aws_access_key_id="<aws access key>",
    aws_secret_access_key="<aws secret key>",
)

# ... or rely on credentials configured in your environment / boto config
conn = boto.ec2.connect_to_region("us-west-2")

conn.run_instances(
    "<ami-image-id>",
    key_name="myKey",
    instance_type="t2.micro",
    security_groups=["your-security-group-here"],
)
To create an EC2 instance on AWS using Python, you need to have an aws_access_key_id_value and an aws_secret_access_key_value.
You can store such variables in config.properties and write your code in a create-ec2-instance.py file.
Create config.properties and save the following in it:
aws_access_key_id_value='YOUR-ACCESS-KEY-OF-THE-AWS-ACCOUNT'
aws_secret_access_key_value='YOUR-SECRETE-KEY-OF-THE-AWS-ACCOUNT'
region_name_value='region'
ImageId_value = 'ami-id'
MinCount_value = 1
MaxCount_value = 1
InstanceType_value = 't2.micro'
KeyName_value = 'name-of-ssh-key'
Create create-ec2-instance.py and save the following code in it:
import boto3

def getVarFromFile(filename):
    # Load config.properties as a Python module named "data"
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

getVarFromFile('config.properties')

ec2 = boto3.resource(
    'ec2',
    aws_access_key_id=data.aws_access_key_id_value,
    aws_secret_access_key=data.aws_secret_access_key_value,
    region_name=data.region_name_value,
)

instance = ec2.create_instances(
    ImageId=data.ImageId_value,
    MinCount=data.MinCount_value,
    MaxCount=data.MaxCount_value,
    InstanceType=data.InstanceType_value,
    KeyName=data.KeyName_value,
)
print(instance[0].id)
Use the following command to execute the Python code:
python create-ec2-instance.py
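If you want the script to block until the instance is actually up, the boto3 resource API exposes waiters on the Instance objects returned by create_instances. A minimal sketch, assuming the instance list from create-ec2-instance.py above:

# Wait until EC2 reports the instance as running, then refresh its attributes
instance[0].wait_until_running()
instance[0].reload()
print(instance[0].state['Name'], instance[0].public_ip_address)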

Using Stored Twitter access_tokens with Twitterizer

I am using C# and the latest Twitterizer API. I have managed to get the user to authenticate and authorize my Twitter application, after which I persist only the access_token, access_token_secret and access_token_verifier.
The problem I have now is that when the user returns (at a later stage, with cookies removed or expired), they identify themselves using our own credentials system, and then I attempt to check whether their Twitter credentials are still valid. I do this by calling the following method:
OAuthTokens t = new OAuthTokens();
t.ConsumerKey = "XXX"; // my applications key
t.ConsumerSecret = "XXX";// my applications secret
t.AccessToken = "XXX";// the users token from the DB
t.AccessTokenSecret = "XXX";//the users secret from the DB
TwitterResponse<TwitterUser> resp = TwitterAccount.VerifyCredentials(tokens);
This is the error I get: "error":"Could not authenticate with OAuth.","request":"/1/account/verify_credentials.json"
I know my tokens are valid because if I call this method:
TwitterResponse<TwitterUser> showUserResponse = TwitterUser.Show(tokens, CORRECT_SCREEN_NAME_HERE);
with my screen name passed in and the same OAuth tokens, it returns correctly.
Any ideas?
C# -> v4.0.30319
Twitterizer -> 2.4.0.2028
In your code, you're defining tokens as t, but when you call VerifyCredentials you're passing it tokens. Is that just an error in your sample code?
