How to use a SAS URL at the directory level in ADLS Gen2 to get the contents of a folder using Python - azure-blob-storage

I have a SAS URL at the directory level and want to use it to read the contents of the directory instead of using a connection string.

Follow the syntax below:
Create the mount:
dbutils.fs.mount(
source = "wasbs://<container_name>@<storage_account_name>.blob.core.windows.net/",
mount_point = "/mnt/t123",
extra_configs = {"fs.azure.sas.<container_name>.<storage_account_name>.blob.core.windows.net": "Your_SAS_token"})
Read a CSV file:
file_location = "wasbs://<container_name>@<storage_account_name>.blob.core.windows.net/filename.csv"
df = spark.read.format("csv").option("inferSchema", "true").option("header", "true").option("delimiter", ",").load(file_location)
display(df)
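Alternatively, to list and read the directory contents directly with the directory-level SAS (no mount), here is a minimal sketch using the azure-storage-file-datalake package; the account, container and directory names below are placeholders:
# pip install azure-storage-file-datalake
from azure.storage.filedatalake import FileSystemClient
sas_token = "<directory_level_sas_token>"  # the query-string portion of the SAS URL
fs = FileSystemClient(
    account_url="https://<storage_account_name>.dfs.core.windows.net",
    file_system_name="<container_name>",
    credential=sas_token)
# list everything under the directory the SAS was issued for
for p in fs.get_paths(path="<directory_name>"):
    print(p.name)
# read one file from that directory
data = fs.get_file_client("<directory_name>/filename.csv").download_file().readall()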
Reference:
Reading and Writing data in Azure Data Lake Storage Gen 2 with Azure Databricks by Ryan Kennedy
Mount ADLS Gen2 storage account provided by Microsoft

Related

Transferring a Google bucket file to the end user without saving the file locally

Right now, when a client downloads a file from my site, I'm:
1. Downloading the file from the Google Cloud bucket to my server (GCP download file, GCP streaming download)
2. Saving the downloaded file to a Ruby Tempfile
3. Sending the Tempfile to the end user using Rails 5 send_file
I would like to skip step 2 and somehow transfer/stream the file from Google Cloud to the end user without the file being saved on my server. Is that possible?
Note that the Google bucket is private.
Code I'm currently using:
# 1 getting file from gcp:
storage = Google::Cloud::Storage.new
bucket = storage.bucket bucket_name, skip_lookup: true
gcp_file = bucket.file file_name
# 2a creates tempfile
temp_file = Tempfile.new('name')
temp_file_path = temp_file.path
# 2b populate tempfile with gcp file content:
gcp_file.download temp_file_path
# 3 sending tempfile to user
send_file(temp_file, type: file_mime_type, filename: 'filename.png')
What I would like:
# 1 getting file from gcp:
storage = Google::Cloud::Storage.new
bucket = storage.bucket bucket_name, skip_lookup: true
gcp_file = bucket.file file_name
# 3 sending/streaming file from google cloud to client:
send_file(gcp_file.download, type: file_mime_type, filename: 'filename.png')
Since making your objects or your bucket publicly readable or accessible is not an option for your project, the best option I can suggest is using signed URLs, so that you still have control over your objects or bucket while giving users sufficient permission to perform specific actions, such as downloading objects from your GCS bucket.
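For example, here is a minimal sketch of the signed-URL approach using the Python google-cloud-storage client (the Ruby google-cloud-storage gem used in the question exposes the equivalent Google::Cloud::Storage::File#signed_url); the bucket and object names are placeholders:
from datetime import timedelta
from google.cloud import storage
client = storage.Client()
blob = client.bucket("my-private-bucket").blob("filename.png")  # placeholder names
# short-lived URL the end user can fetch directly from GCS,
# so the file never passes through your server
url = blob.generate_signed_url(version="v4", expiration=timedelta(minutes=15), method="GET")
print(url)  # redirect the client to this URL instead of calling send_file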

Bulk import PDF files into an Oracle table

I have multiple folders on my disk and each folder has PDF files (4 files in each folder). How can I insert the files in each folder into Oracle table rows? The folder name will be the primary key (being a unique social svc #). I have used the code as-is from this link, but I get the following error:
ORA-22285: non-existent directory or file for FILEOPEN operation
ORA-06512: at SYS.DBMS_LOB, line 805
I have also granted all permissions on the directory to my user with the command:
grant all on directory blob_dir to testuser
Please tell me what I am doing wrong.
If you are going to use the BLOB data type, you can load data from external files using SQL*Loader. If you are going to use BFILE, you just need to copy the files onto the Oracle server's file system and grant access to them via a DIRECTORY object with the READ privilege. BFILE provides read-only access to external files via SQL.
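If a client-side load is acceptable instead of SQL*Loader, here is a minimal Python sketch using cx_Oracle that walks the folders and inserts each PDF as a BLOB; the connection details, the root path and the table docs(ssn, filename, pdf) are assumptions for illustration:
import os
import cx_Oracle
# assumed connection and an assumed table docs(ssn varchar2, filename varchar2, pdf blob)
conn = cx_Oracle.connect("testuser", "password", "localhost/orclpdb1")
cur = conn.cursor()
root = r"C:\pdf_folders"  # assumed root folder: one subfolder per social svc #
for ssn in os.listdir(root):
    folder = os.path.join(root, ssn)
    if not os.path.isdir(folder):
        continue
    for name in os.listdir(folder):
        if name.lower().endswith(".pdf"):
            with open(os.path.join(folder, name), "rb") as f:
                cur.execute("insert into docs (ssn, filename, pdf) values (:1, :2, :3)",
                            [ssn, name, f.read()])
conn.commit()
conn.close()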

How to export user lists and passwords from a Synology NAS

I would like to know whether there is a way to export user lists and passwords from a Synology NAS device.
Local
See user Greenstream's answer in the Synology Forum:
1. Download the configuration backup file from the Synology
2. Change the file extension from .cfg to .gzip
3. Unzip the file using 7-Zip or another utility that can extract gzip archives
4. Download and install DB Browser for SQLite from http://sqlitebrowser.org/
5. Open the extracted '_Syno_ConfBkp.db' file in DB Browser for SQLite
6. From the top menu bar select File, then Export, then Export as CSV
7. In the export dialog select the table confbkp_user_tb
8. In the options:
a. select "Column names in first line", field separator character ","
b. quote character "
c. new line characters "Windows: CR+LF (\r\n)"
9. Save the file to your desktop and open it in Excel
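If you prefer to script the export (steps 5-9 above), here is a minimal Python sketch that reads the same table from the extracted database using only the standard library; the output file name is an assumption:
import csv
import sqlite3
# the database file extracted from the configuration backup (see the steps above)
con = sqlite3.connect("_Syno_ConfBkp.db")
cur = con.cursor()
cur.execute("SELECT * FROM confbkp_user_tb")
with open("synology_users.csv", "w", newline="") as f:  # assumed output file name
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # column names in the first line
    writer.writerows(cur.fetchall())
con.close()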
LDAP
Based on ldap2csv.py and How to retrieve all the attributes of LDAP database to determine the available attributes, using python-ldap:
#!/usr/bin/python
import ldap
host = 'ldap://[ip]:389' # [ip]: The ip/name of the NAS, using the default port
dn = 'uid=[uid],cn=[cn],dc=[dc]' # LDAP Server Settings: Authentication Information / Bind DN
pw = '[password]' # LDAP Server Settings: Password
base_dn = 'dc=[dc]' # LDAP Server Settings: Authentication Information / Base DN
filter = '(uid=*)' # Get all users
attrs = ['cn', 'uid', 'uidNumber', 'gidNumber', 'homeDirectory', 'userPassword', 'loginShell', 'gecos', 'description']
con = ldap.initialize(host)
con.simple_bind_s(dn, pw)
res = con.search_s(base_dn, ldap.SCOPE_SUBTREE, filter, attrs)
con.unbind()
print(res)
The used ports can be found here.
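If a CSV file is needed rather than the raw print output, the result from the script above can be written out with Python's csv module; the output file name is an assumption:
import csv
with open("ldap_users.csv", "w", newline="") as f:  # assumed output file name
    writer = csv.writer(f)
    writer.writerow(attrs)  # header row: the attributes requested above
    for entry_dn, entry in res:
        # python-ldap 3.x returns attribute values as lists of bytes
        writer.writerow([",".join(v.decode("utf-8", "replace") if isinstance(v, bytes) else str(v)
                                  for v in entry.get(a, [])) for a in attrs])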

Laravel 5: How to copy (stream) a file from Amazon S3 to FTP?

I have to move large content, which I don't want to load into memory, from AWS S3 to FTP with Laravel's filesystem.
I know how to stream local content to S3, but haven't found a solution for going from S3 to FTP yet.
The closest I found was this, but I'm stuck adapting it to my case.
Here is what's missing in my code (??):
$inputStream = Storage::disk('s3')->getDriver()->??
$destination = Storage::disk('ftp')->getDriver()->??
Storage::disk('ftp')->getDriver()->putStream($destination, $inputStream);
I think I found a solution:
$input = Storage::disk('s3')->getDriver();
$output = Storage::disk('ftp')->getDriver();
$output->writeStream($ftp_file_path, $input->readStream($s3_file_path));
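This should avoid buffering the whole file: Flysystem's readStream() returns a PHP stream resource rather than the full contents, and writeStream() consumes that resource, so the data is copied in chunks instead of being loaded into memory at once.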

Move files to HDFS using Spring XD

How can I move files from the local disk to HDFS using Spring XD?
I do not want just the contents; I want to move the whole file for archival, so that it is saved with its original name and content.
Here is what I have tried:
stream create --name fileapple --definition "file --mode=ref --dir=/Users/dev/code/open/learnspringxd/input --pattern=apple*.txt | WHATTODOHERE"
I can see that with --mode=ref the file names with their full paths are made available; how do I move those files to HDFS?
You might want to check this, which imports data from files to HDFS as a batch job, and see if it fits your requirement. You can also check the file | hdfs stream to see if that works for you.
An example like the one below will load files from the data folder into HDFS and save them into date folders (if there are multiple records with different dates), partitioned by the record column named LastModified; the data file is a JSON file with one record per line.
file --mode=ref --dir=/Users/dev/code/open/learnspringxd/input --pattern=apple*.txt | hdfs --directory=/user/file_folder --partitionPath=path(dateFormat('yyyy-MM-dd',#jsonPath(payload,'$.LastModified'),'yyyy-MM-dd')) --fileName=output_file_name_prefix --fsUri=hdfs://HDFShostname.company.com:8020 --idleTimeout=30000
