I want to connect to Azure Cosmos DB from a Ruby file. Can anyone suggest how I can connect to the database? Is there a gem available to which I can pass the connection string for that DB?
Thanks
The following worked for me; I found it here: Connect to CosmosDB using RUBY.
If you have not already, first specify connection properties in an ODBC DSN (data source name). This is the last step of the driver installation. You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs.
You need the following information from Cosmos DB:
AccountEndpoint: The Cosmos DB account URL from the Keys blade of the Cosmos DB account
AccountKey: In the Azure portal, navigate to the Cosmos DB service and select your Azure Cosmos DB account. From the resource menu, go to the Keys page. Find the PRIMARY KEY value and set AccountKey to this value.
You will need to install the following gems:
gem install dbi
gem install dbd-odbc
gem install ruby-odbc
Then write something like this:
# connect to the DSN
require 'dbi'

cnxn = DBI.connect('DBI:ODBC:CData CosmosDB Source', '', '')

# execute a SELECT query and store the result set
resultSet = cnxn.execute("SELECT City, CompanyName FROM Customers")

# display the names of the columns
resultSet.column_names.each do |name|
  print name, "\t"
end
puts

# display the results
while row = resultSet.fetch do
  (0..resultSet.column_names.size - 1).each do |n|
    print row[n], "\t"
  end
  puts
end
resultSet.finish

# close the connection
cnxn.disconnect if cnxn
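If you need to filter on a value, the same DBI connection also supports prepared, parameterized statements. A small sketch against the same hypothetical DSN and Customers table (the WHERE clause and the bound value are only illustrative):

require 'dbi'

cnxn = DBI.connect('DBI:ODBC:CData CosmosDB Source', '', '')

# prepare once, then execute with a bound parameter
sth = cnxn.prepare("SELECT City, CompanyName FROM Customers WHERE City = ?")
sth.execute("London")
while row = sth.fetch do
  puts row.to_a.join("\t")
end
sth.finish
cnxn.disconnect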
Since the question does not mention any existing setup on your side, I'm assuming you want the complete procedure for working with Azure Cosmos DB from Ruby.
You need to create an Azure storage account or an Azure Cosmos DB account.
The easiest way to create an Azure storage account is by using the Azure portal. To learn more, see Create a storage account.
To create an Azure Cosmos DB Table API account, see Create a database account.
To use Azure Storage or Azure Cosmos DB, you must download and use the Ruby Azure package that includes a set of convenience libraries that communicate with the Table REST services.
Use RubyGems to obtain the package
Use a command-line interface such as PowerShell (Windows), Terminal (Mac), or Bash (Unix).
Type gem install azure-storage-table in the command window to install the gem and dependencies.
Import the package
Using your favorite text editor, add the following to the top of the Ruby file where you intend to use storage: require "azure/storage/table"
Add your connection string
You can either connect to the Azure storage account or the Azure Cosmos DB Table API account. Get the connection string based on the type of account you are using.
Add an Azure Storage connection
The Azure Storage module reads the environment variables AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY for information required to connect to your Azure Storage account. If these environment variables are not set, you must specify the account information before using Azure::Storage::Table::TableService with the following code:
Azure.config.storage_account_name = "your Azure Storage account"
Azure.config.storage_access_key = "your Azure Storage access key"
Add an Azure Cosmos DB connection
To connect to Azure Cosmos DB, copy your primary connection string from the Azure portal, and create a Client object using your copied connection string. You can pass the Client object when you create a TableService object:
common_client = Azure::Storage::Common::Client.create(storage_account_name:'myaccount', storage_access_key:'mykey', storage_table_host:'mycosmosdb_endpoint')
table_client = Azure::Storage::Table::TableService.new(client: common_client)
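Continuing from the table_client created above, basic table operations then look the same against either backend. A rough sketch (the table name, keys, and property names below are just placeholders):

# create a table, insert an entity, then read it back
table_client.create_table("testtable")

entity = { "content" => "test entity", :PartitionKey => "test-partition-key", :RowKey => "1" }
table_client.insert_entity("testtable", entity)

result = table_client.get_entity("testtable", "test-partition-key", "1")
puts result.properties["content"]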
Please refer to the link if required.
I am trying to configure an external Hive metastore for my Azure Synapse Spark pool. The rationale behind using an external metastore is to share table definitions across Databricks and Synapse workspaces.
However, I am wondering if it's possible to access the backend data via the metastore. For example, can clients like Power BI or Tableau connect to the external metastore and retrieve not just the metadata, but also the business data in the underlying tables?
Also, what additional value does an external metastore provide?
You can configure an external Hive metastore in Synapse by creating a linked service for that external source and then querying it from a Synapse serverless pool.
Follow the steps below to connect to the external Hive metastore.
In Synapse Studio, go to the Manage icon on the left side of the page. Click on it, then click on Linked services. To create a new linked service, click + New.
Search for Azure SQL Database or Azure Database for MySQL for the external Hive metastore; Synapse supports these two external Hive metastores. Select one and continue.
Fill in all the required details like name, subscription, server name, database name, username and password, and test the connection.
You can test the connection to the Hive metastore database using the code below.
%%spark
import java.sql.DriverManager

/** this JDBC url can be copied from Azure portal > Azure SQL database > Connection strings > JDBC **/
val url = s"jdbc:sqlserver://<servername>.database.windows.net:1433;database=<databasename>;user=utkarsh;password=<password>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"

try {
  val connection = DriverManager.getConnection(url)
  // run any simple query against the metastore database to verify connectivity
  val result = connection.createStatement().executeQuery("select * from dbo.persons")
  result.next();
  println(s"Successfully tested connection. First column of first row: ${result.getString(1)}")
} catch {
  case ex: Throwable => println(s"Failed to establish connection:\n $ex")
}
Can clients like Power BI or Tableau connect to the external metastore and retrieve not just the metadata, but also the business data in the underlying tables?
Yes, Power BI can connect to Azure SQL Database using its built-in connector.
In Power BI Desktop, go to Get Data, click on Azure, and select Azure SQL Database. Click Connect.
In the next step, give the server name in the format <servername>.database.windows.net (for example, utkarsh.database.windows.net), along with the database name, username, and password, and you can then access the data in Power BI.
I am using Oracle NoSQL Cloud Service on OCI and I want to write a program using the Oracle NoSQL Database Python SDK.
I did a test using the OCI SDK; I am using instance-principal IAM instead of creating config files with the tenancy/user OCID and API private keys on the nodes that invoke the NoSQL API calls.
Is it possible to make a connection using an instance principal, instead of creating config files with the tenancy/user OCID and API private keys, with the Oracle NoSQL Database Python SDK?
I read the examples provided in the documentation https://github.com/oracle/nosql-python-sdk but I cannot find information about instance-principal support.
The Oracle NoSQL Database Python SDK works with instance-principals and resource principals. See the documentation https://nosql-python-sdk.readthedocs.io/en/stable/api/borneo.iam.SignatureProvider.html
Here is an example using resource principals and Oracle Functions:
import os
import borneo
import borneo.iam

def get_handle():
    provider = borneo.iam.SignatureProvider.create_with_resource_principal()
    compartment_id = provider.get_resource_principal_claim(
        borneo.ResourcePrincipalClaimKeys.COMPARTMENT_ID_CLAIM_KEY)
    config = borneo.NoSQLHandleConfig(os.getenv('NOSQL_REGION'), provider) \
        .set_logger(None).set_default_compartment(compartment_id)
    return borneo.NoSQLHandle(config)
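Since the question is specifically about instance principals: the SDK also exposes SignatureProvider.create_with_instance_principal() for code running on an OCI compute instance. A minimal sketch (the environment variable names are placeholders, not part of the SDK):

import os
import borneo
import borneo.iam

def get_handle_instance_principal():
    # credentials come from the instance metadata service, so no config
    # file, user OCID or API private key is needed on the node
    provider = borneo.iam.SignatureProvider.create_with_instance_principal()
    config = borneo.NoSQLHandleConfig(os.getenv('NOSQL_REGION'), provider) \
        .set_default_compartment(os.getenv('NOSQL_COMPARTMENT_OCID'))
    return borneo.NoSQLHandle(config)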
I want to explore Oracle Data Integrator. I am not able to understand what the 'Use Credential File' option in the data server does in Oracle Data Integrator. If anyone can explain, it would be helpful. I also want to improve the performance of my Oracle Data Integrator script; any ideas on that are welcome as well.
OK, now I think I understand: you run ODI in the cloud.
You will need a credential file in order to connect to your database.
The way you obtain that credential file is:
Credential files are downloaded from the ADW console to the ODI host in the Oracle Cloud Infrastructure (OCI).
Note: When ODI is deployed from the Marketplace, client credential folders are downloaded from autonomous databases that exist in the OCI compartment containing ODI.
If ADW is in a different compartment than ODI, follow the steps below.
Download the Credentials
Connect to the ODI host using VNC. Refer to the Deployment blog above for details.
Launch Firefox from the Applications > Favorites list.
Follow the steps in Downloading Autonomous Data Warehouse Credentials to obtain the client credentials compressed folder containing the wallet and network configuration files used by ODI to make the connections.
The entire way of connecting is described here.
I am trying to create an external data source in Azure Synapse Analytics (Azure SQL Data Warehouse) pointing to an external Oracle database. I am using the following code in SSMS to do that:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'myPassword';
CREATE DATABASE SCOPED CREDENTIAL MyCred WITH IDENTITY = 'myUserName', Secret = 'Mypassword';
CREATE EXTERNAL DATA SOURCE MyEXTSource
WITH (
LOCATION = 'oracle://<myIPAddress>:1521',
CREDENTIAL = MyCred
)
I am getting the following error:
CREATE EXTERNAL DATA SOURCE statement failed because the 'TYPE' option is not specified. Specify a value for the 'TYPE' option and try again.
I understand from the documentation below that TYPE is not a required option for Oracle databases.
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest
I'm not sure what the problem is here. Is this feature still not supported in Azure Synapse Analytics (Azure DW) when it is already available in SQL Server 2019? Any ideas are welcome.
PolyBase has different versions across the different products, with different capabilities. Most of these are described here:
The ability to connect to Oracle is only present in the SQL Server versions, currently 2019. The documentation is quite clear that it only applies to SQL Server and not to Azure Synapse Analytics (formerly Azure SQL Data Warehouse):
https://learn.microsoft.com/en-us/sql/relational-databases/polybase/polybase-configure-oracle?view=sql-server-ver15
In summary, Azure Synapse Analytics and its version of PolyBase do not currently support access to external Oracle tables.
I was making a web app to deploy on Heroku.com when I realized that the only database type they support is PostgreSQL. Up until now, my app (powered by the Ruby gem Sinatra) accessed the database via the sqlite method of the Sequel gem.
Here's my Ruby script from when I used Sequel to access the .db file via SQLite:
DB = Sequel.sqlite('mydatabase.db')

DB.create_table :mytable do
  primary_key :id
  String :column_name
end
I installed PostgreSQL after learning Heroku used only that. Here's the script using the postgres adapter (my username is literally 'postgres', though I obviously won't reveal my password in this question):
DB = Sequel.postgres('mydatabase.db', :user=>'postgres', :password=>'my_password_here', :host=>'localhost', :port=>5432, :max_connections=>10)

DB.create_table :mytable do
  primary_key :id
  String :column_name
end
However, when I run this code, I get the following error:
C:/Ruby193/lib/ruby/gems/1.9.1/gems/sequel-3.38.0/lib/sequel/adapters/postgres.rb:208:in 'initialize': PG::Error: FATAL: database "mydatabase.db" does not exist (Sequel::DatabaseConnectionError)
I've tried searching Google, StackOverflow, Sequel documents, and the Heroku help documents for any help, but I've found no fix to this problem.
Does anyone know what I am doing wrong?
The database mydatabase.db doesn't exist, as per the error message from Pg. Likely reasons:
You probably meant mydatabase, without the SQLite-specific .db filename suffix;
It's possible you created the db with a different case, e.g. "Mydatabase.db";
You might be connecting to a different Pg server than you think you are;
You never created the database. Unlike SQLite's default behaviour, Pg doesn't create databases when you try to connect to a database that doesn't exist yet.
If in doubt, connect to Pg with psql and run \l to list databases, or connect via PgAdmin-III.
The PostgreSQL documentation and tutorial are highly recommended, too. They're well written and will teach you a lot about SQL in general as well as Pg in particular.
BTW, the postgres user is a superuser. You should not be using it for your application; it's like running your server as root, i.e. a really bad idea. Create a new PostgreSQL user without superuser, createdb or createuser rights and use that for your application. You can either CREATE DATABASE somedb WITH OWNER myappuser - or preferably, create the database owned by a different user to your webapp user and then explicitly GRANT the webapp user the minimum required permissions. See user management and GRANT.
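For example, roughly like this (all names and passwords are placeholders; you can equally run the same SQL from psql):

require 'sequel'

# one-off administrative connection to the maintenance database
admin = Sequel.postgres('postgres', :user=>'postgres', :password=>'admin_password_here', :host=>'localhost')
admin.run "CREATE ROLE myappuser LOGIN PASSWORD 'app_password_here' NOSUPERUSER NOCREATEDB NOCREATEROLE"
admin.run "CREATE DATABASE mydatabase OWNER myappuser"
admin.disconnect

# the application then connects as the restricted user
DB = Sequel.postgres('mydatabase', :user=>'myappuser', :password=>'app_password_here', :host=>'localhost')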
On Heroku, all you need to do is tell Sequel to connect to the content of the DATABASE_URL environment variable (which is a properly formed URL that Sequel understands):
DB = Sequel.connect(ENV['DATABASE_URL'])
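A minimal sketch putting it together, with a local fallback for development (the local URL is a placeholder):

require 'sequel'

DB = Sequel.connect(ENV['DATABASE_URL'] || 'postgres://myappuser:app_password_here@localhost/mydatabase')

# create_table? only creates the table if it does not already exist
DB.create_table? :mytable do
  primary_key :id
  String :column_name
end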