How to connect to on-premise Oracle-db using Azure Functions? - oracle

I'm trying to create an Azure Function to connect to and query an on-premise Oracle DB. As far as I can tell, neither the Oracle Client nor an ODBC driver is installed on the Azure Functions hosts to handle this.
Are there any solutions to this using JS or Python?
I have tried using the node-odbc driver, but the server is missing the Oracle client.

Partial answer: for connecting from Azure Functions to an on-premise service such as an Oracle DB, there is an existing SO thread, How to Azure function configure for Site-to-Site Connectivity?, which answers that part and which you can refer to. So first, make sure network access from Azure to the on-premise server is available.
Then, if you want to query an Oracle database via ODBC, the Oracle ODBC driver must be installed on the client side. However, the Oracle ODBC driver is a commercial component that you have to pay for and install manually in Azure Functions. So even though you want to use JS or Python, I think using Java with the Oracle JDBC driver is a better way to connect to an Oracle DB from Azure Functions, because it avoids the extra installation.
The other approach I can think of is to deploy a REST API app as a proxy on your on-premise server, which handles query requests coming from an Azure Function written in JS or Python and forwards them to the Oracle DB.
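As a minimal sketch of the Azure Function side of that proxy idea (the proxy URL, the /query endpoint, and the payload shape are all hypothetical, not an existing API), the function could package the SQL text and bind values into a JSON payload and POST it to the on-premise proxy, which would then run the query locally:

```javascript
// Hypothetical sketch: the Azure Function side of the on-premise proxy approach.
// The proxy URL and payload shape below are assumptions, not a real service.

// Keep the SQL text and the bind values separate, so the on-premise proxy
// can use real bind parameters instead of string concatenation (avoids
// SQL injection).
function buildQueryPayload(sql, binds) {
  if (typeof sql !== 'string' || sql.trim() === '') {
    throw new Error('sql must be a non-empty string');
  }
  return JSON.stringify({ sql: sql, binds: binds || [] });
}

const payload = buildQueryPayload(
  'SELECT department_name FROM departments WHERE manager_id = :id',
  [103]
);

// In the real function you would POST this payload to the proxy, e.g.:
// const res = await fetch('https://onprem-proxy.example.com/query', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: payload
// });
console.log(payload);
```

The proxy itself would validate the payload and execute the query with whatever Oracle driver is available on the on-premise machine.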

Create and deploy your code to Azure Functions as a custom Docker container using a Linux base image, and install the Oracle client libraries in the custom image. Check out my blog post on the topic:
https://www.ravitella.com/azure-function-container/

You will need to establish a hybrid connection. See https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections
Then you can refer to this thread to establish a context:
https://social.msdn.microsoft.com/Forums/en-US/2434fe10-3219-4625-bdbf-b9e96421230b/entity-framework-with-oracle-and-sql
For a Node.js solution, I would add the node-oracledb NPM package and install it using Kudu:
https://oracle.github.io/node-oracledb/doc/api.html#getstarted
// myscript.js
// This example uses async/await (Node 8+).
const oracledb = require('oracledb');

// Set the hr schema password, e.g. via an environment variable.
const mypw = process.env.NODE_ORACLEDB_PASSWORD;

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: "hr",
      password: mypw,
      connectString: "localhost/XEPDB1"
    });

    const result = await connection.execute(
      `SELECT manager_id, department_id, department_name
         FROM departments
        WHERE manager_id = :id`,
      [103]  // bind value for :id
    );
    console.log(result.rows);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

Related

On Prem to Cloud with Data Factory

I have one on-prem Oracle database and one Azure SQL Database, and I want to use a Copy Data activity to transfer this data.
I have now created a Self-Hosted IR for the Oracle database, and I am able to connect to it and preview data from the Data Factory editor.
I have an Azure SQL Database that I want to receive the data, set up with AutoResolveIntegrationRuntime, with "Connection successful". I am also able to preview data from this database.
When I try to run this Copy Data activity I get the following error message:
ErrorCode=SqlFailedToConnect,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Database: 'sqlsrv', Database: 'database', User: 'user'. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.
Based on all the Docs/Tutorials I have read, this should not be failing. I have tried opening up the SQL Server firewall rules to allow all IP addresses.
Any ideas what I might be doing wrong here?
Since the integration runtime is a bridge between on-prem and cloud, you need to check whether you are able to access both the on-prem database and the Azure SQL database from the VM on which the IR is installed.
The VM hosting the IR must be able to reach both the source and the sink for the copy activity to succeed whenever either the source or the sink uses the self-hosted runtime.
So the issue is not with the Azure SQL database itself, but with the VM hosting the IR.

Connect Aurora Severless from tableau desktop and tableau server

It's a follow-up to a question that I asked earlier here on Stack Overflow:
Not able connect Amazon Aurora Serverless from SQL client
I found a cool hack that, with some tweaks, works perfectly for my development purposes, and I know I should not use this in my production environment.
As we know, Aurora Serverless only works inside a VPC. So make sure you are attempting to connect to Aurora from within the VPC, and that the security group assigned to the Aurora cluster has the appropriate rules to allow access. As I mentioned earlier, I already have an EC2 instance, an Aurora Serverless cluster, and a VPC around both. So I can access it from my EC2 instance but not from my local PC / local SQL client. To fix that, I did the two steps below.
1. To access it from any client (Navicat in my case):
a. First, add the GENERAL DB configuration, like the Aurora endpoint host, username, password, etc.
b. Then add the SSH configuration: the EC2 machine username, host IP, and .pem file path.
2. To access it from a project:
First, I create an SSH tunnel from my terminal like this:
ssh ubuntu@my_ec2_ip_goes_here -i rnd-vrs.pem -L 5555:database-1.my_aurora_cluster_url_goes_here.us-west-2.rds.amazonaws.com:5432
Then I run my project with a DB configuration like this (test.php):
$conn = pg_connect("host=127.0.0.1 port=5555 dbname=postgres user=postgres password=password_goes_here");
if (!$conn) {
    echo "An error occurred.\n";
    exit;
}

// other code goes here to get data from your database
$result = pg_query($conn, "SELECT * FROM brands");
if (!$result) {
    echo "An error occurred.\n";
    exit;
}

while ($row = pg_fetch_row($result)) {
    echo "Brand Id: $row[0] Brand Name: $row[1]";
    echo "<br />\n";
}
So what is my question now?
I need to connect to my Aurora Serverless cluster from Tableau Desktop and Tableau Server. For Tableau Desktop I used the same SSH tunneling and it works, but how do I do it for Tableau Server?
Tableau Server does not support centralized data sources. You need to connect Tableau Desktop to your data source and publish it to Tableau Server when you want to expose the data through the UI.
https://community.tableau.com/thread/111282

Hyperledger fabric client credential store using CouchDB

I am using the Hyperledger Fabric SDK for Node.js to enroll a user, and this code to deploy to Fabric. It uses a FileKeyValueStore (which uses files to store the key values) to store the client's user credentials.
I want to use a CouchDBKeyValueStore to store the user key in a CouchDB database instance. What changes do I need to make in the client connection profile configuration file for the credential store, and in the code, to do so? Any link to sample code would also help.
There is no built-in support in the connection profile for using the CouchDBKeyValueStore, but you can still use the connection profile for the rest of the Fabric network configuration. You'll then need to use the Client APIs to configure the stores. Something like:
const Client = require('fabric-client');
const CDBKVS = require('fabric-client/lib/impl/CouchDBKeyValueStore.js');

const client = Client.loadFromConfig('test/fixtures/network.yaml');

// Set the state store
let stateStore = await new CDBKVS({
  url: 'https://<USERNAME>:<PASSWORD>@<URL>',
  name: '<DB_NAME>'
});
client.setStateStore(stateStore);

// Set the crypto store
const crypto = Client.newCryptoSuite();
let cryptoKS = Client.newCryptoKeyStore(
  CDBKVS,
  {
    url: 'https://<USERNAME>:<PASSWORD>@<URL>.cloudant.com',
    name: '<DB_NAME>'
  }
);
crypto.setCryptoKeyStore(cryptoKS);
client.setCryptoSuite(crypto);
Official document Reference
Store Hyperledger Fabric certificates and keys in IBM Cloudant with Fabric Node SDK

SAP HANA hostname jdbc driver

I am trying to connect my Java program to the HANA database. However, I am unable to do so because I have to connect my program to the database through a URL which I don't know. I registered for a HANA trial online: https://account.hanatrial.ondemand.com. I created the account and the database and added it to the Eclipse HANA tools. How do I retrieve the URL/server name/IP address that I have to use in place of HDB_URL?
I used this to connect to the HANA cloud system: http://saphanatutorial.com/add-sap-hana-cloud-system-in-hana-studio-or-eclipse
And I am trying to do this: http://saphanatutorial.com/sap-hana-text-analysis-using-twitter-data/
package com.saphana.startupfocus.util;

import java.sql.*;

import com.saphana.startupfocus.config.Configurations;

public class HDBConnection {

    public static Connection connection = null;

    public static Connection getConnection() {
        try {
            if (null == connection) {
                connection = DriverManager.getConnection(Configurations.HDB_URL,
                        Configurations.HDB_USER, Configurations.HDB_PWD);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return connection;
    }

    // Test HDB Connection
    public static void main(String[] argv) throws ClassNotFoundException {
        connection = HDBConnection.getConnection();
        if (connection != null) {
            try {
                System.out.println("Connection to HANA successful!");
                Statement stmt = connection.createStatement();
                ResultSet resultSet = stmt.executeQuery("SELECT 'helloworld' FROM dummy");
                resultSet.next();
                String hello = resultSet.getString(1);
                System.out.println(hello);
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
Since you are trying to connect to an SAP HANA cloud instance, you cannot connect to the instance directly via a URL.
Instead, you need to use a "database tunnel", as explained in the documentation for SAP HANA Cloud.
In SAP HANA Studio this is not required, because SAP HANA Studio automatically handles the database tunnel for you when connecting to a cloud system.
I faced the same issue and figured out a solution.
Please refer to the following link:
HOW to connect to sap hana cloud instance thru jdbc
Let me know if you have any questions.
You can download the SAP HANA SDK and use the included neo tool to establish a tunnel connection to your trial instance:
neo open-db-tunnel
-h hanatrial.ondemand.com
-i <dbinstance> -a <p-accounttrial> -u <pusername>
This gives you something like this:
SAP HANA Cloud Platform Console Client
Password for your user:
Opening tunnel...
Tunnel opened.
Use these properties to connect to your schema:
Host name : localhost
Database type : HANAMDC
JDBC Url : jdbc:sap://localhost:30015/
Instance number : 00
Use any valid database user for the tunnel.
This tunnel will close automatically in 24 hours or when you close the shell.
Press ENTER to close the tunnel now.
Now you can connect to your cloud database via the local tunnel on port 30015, using the JDBC URL shown above.
If you are working with a trial or productive SAP Cloud Platform account, a better way to do this now is to use a service channel.
You need to install the SAP Cloud Connector and create a service channel. To do so, open the Cloud Connector, go to the "On-Premise to Cloud" option, and select the option for HANA Database. Once this is done, you can use localhost as the hostname (since traffic is routed through the Cloud Connector), and the port is provided by the same screen.
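For example, assuming the Cloud Connector assigns local port 30115 to the service channel (the actual port is shown in the Cloud Connector UI, so treat 30115 as a placeholder), the JDBC URL follows the same pattern as in the tunnel case:

```
jdbc:sap://localhost:30115/
```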

Emulate Oracle DB server to provide external callback

I am trying to think of a solution for providing my own callback for events which can only be pushed by an external server to an Oracle Database server (the configuration parameters are IP, port, user, and password).
I have been told that, if I did have an Oracle DB server, I could provide my callback using a script like this:
open SQL connection;
exec my_own_callback("parameter1");
close conn;
Is there any available tool to emulate an Oracle Database server just enough that the function my_own_callback, provided in a shared library, is called whenever an event is pushed by the external platform?
I have searched the web for such alternatives, but could not find any (yet?).