I am creating an Amazon RDS Microsoft SQL Server database using an AWS CloudFormation template. The instance took some time to create, and I can connect to the database successfully, but it sometimes throws this exception:
Service was unable to open new database connection when requested.
SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
Here is the part of the CloudFormation template that I am using to create the database:
"ProductDatabase":{
"Type":"AWS::RDS::DBInstance",
"Condition":"isNewDatabase",
"Properties":{
"AllocatedStorage":{
"Ref":"AllocatedStorage"
},
"DBInstanceClass":"db.m4.large",
"Port":"1433",
"StorageType":"gp2",
"BackupRetentionPeriod":"7",
"DBName":"",
"DBSubnetGroupName": {
"Ref": "DatabaseSubnetGroup"
},
"MasterUsername":{
"Ref":"DBUserName"
},
"MasterUserPassword":{
"Ref":"DBPassword"
},
"Engine":"sqlserver-se",
"EngineVersion":"13.00.4422.0.v1",
"LicenseModel":"license-included",
"VPCSecurityGroups" : [ { "Ref" : "rdsDBSecurityGroup" } ],
"Tags": [
{
"Key": "Name",
"Value": { "Ref": "AWS::StackName" }
}
]
}
},
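For reference, the rdsDBSecurityGroup referenced above is not shown in the extract. Since an intermittent pre-login handshake timeout can be a network-path issue rather than a template issue, a minimal sketch of a security group that opens port 1433 is shown below; the VpcId reference and the CidrIp range are assumptions, not taken from the original template:
"rdsDBSecurityGroup":{
   "Type":"AWS::EC2::SecurityGroup",
   "Properties":{
      "GroupDescription":"Example only - allow SQL Server (1433) traffic to the RDS instance; adjust CidrIp to your own network",
      "VpcId":{ "Ref":"VpcId" },
      "SecurityGroupIngress":[
         {
            "IpProtocol":"tcp",
            "FromPort":"1433",
            "ToPort":"1433",
            "CidrIp":"10.0.0.0/16"
         }
      ]
   }
},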
I have installed the Oracle Data Access Components 12.2c (both 32-bit and 64-bit) on my SQL Server machine.
With these components I was able to directly import tables from an Oracle database into the tabular model at design time, providing a configuration name (CODASREP) from the tnsnames.ora file as the connection string, along with the database credentials. A full process of the model at design time was no problem. I'm using the integrated workspace.
After I deployed the model to the SSAS server, I wanted to process it and got this error:
Failed to save modifications to the server. Error returned:
'OLE DB or ODBC error: [DataSource.Error] The provider being used is deprecated: 'System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.'.
Please visit https://go.microsoft.com/fwlink/p/?LinkID=272376 to install the official provider.
It seems that the SSAS server is using the Microsoft provider, and not the installed Oracle provider, when processing the deployed model directly on the server.
The data source definition (TMSL createOrReplace) is:
{
"createOrReplace": {
"object": {
"database": "CODAS_DIRECT",
"dataSource": "Oracle/CODASREP"
},
"dataSource": {
"type": "structured",
"name": "Oracle/CODASREP",
"connectionDetails": {
"protocol": "oracle",
"address": {
"server": "CODASREP"
},
"authentication": null,
"query": null
},
"options": {
"hierarchicalNavigation": true
},
"credential": {
"AuthenticationKind": "UsernamePassword",
"kind": "Oracle",
"path": "codasrep",
"Username": "user_name"
}
}
}
}
How do I force SSAS to use the Oracle provider when processing the deployed model?
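One direction that may be worth exploring (shown only as a hedged sketch, not a verified fix; the provider name, the impersonation mode, and whether the model's compatibility level allows legacy data sources are all assumptions) is to define the data source as a legacy provider data source with an explicit Oracle OLE DB connection string instead of a structured one, so that server-side processing does not fall back to System.Data.OracleClient. The dataSource part of the createOrReplace might then look roughly like:
"dataSource": {
    "type": "provider",
    "name": "Oracle CODASREP (legacy)",
    "connectionString": "Provider=OraOLEDB.Oracle;Data Source=CODASREP;",
    "impersonationMode": "impersonateServiceAccount"
}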
I'm working on Cognos Dashboard Embedded using the reference from
Cognos Dashboard Embedded,
but instead of CSV I'm working with JDBC data sources.
I'm trying to connect to the JDBC data source as follows:
"module": {
"xsd": "https://ibm.com/daas/module/1.0/module.xsd",
"source": {
"id": "StringID",
"jdbc": {
"jdbcUrl": "jdbcUrl: `jdbc:db2://DATABASE-HOST:50000/YOURDB`",
"driverClassName": "com.ibm.db2.jcc.DB2Driver",
"schema": "DEFAULTSCHEMA"
},
"user": "user_name",
"password": "password"
},
"table": {
"name": "ROLE",
"description": "description of the table for visual hints ",
"column": [
{
"name": "ID",
"description": "String",
"datatype": "BIGINT",
"nullable": false,
"label": "ID",
"usage": "identifier",
"regularAggregate": "countDistinct",
},
{
"name": "NAME",
"description": "String",
"datatype": "VARCHAR(100)",
"nullable": true,
"label": "Name",
"usage": "identifier",
"regularAggregate": "countDistinct"
}
]
},
"label": "Module Name",
"identifier": "moduleId"
}
Note: my database is hosted on a private network, not on a public IP address.
When I add the above code to register the data source, the data does not load from my DB.
Even though I specified the correct user and password for the JDBC connection in the code above, when I drag and drop any field from the data source a popup opens asking me for a user ID and password.
Even after I fill in the user ID and password in the popup, I am still unable to load the data.
Errors:
1. When any module tries to fetch data, it calls the API
'https://dde-us-south.analytics.ibm.com/daas/v1/data?moduleUrl=%2Fda......'
but in my case this API fails with Status Code: 403 Forbidden.
2. In SignOnDialog.js, at line 98, the call to the saveDataSourceCredential method fails with "saveDataSourceCredential is not a function".
Expectation:
It should not open a popup asking for a user ID and password; the data should load directly, just as it does for databases hosted on public IP addresses.
This does not work in general. If you are using any type of functionality hosted outside your network that needs to access an API or data on your private network, there needs to be some communication channel.
That channel could be established by setting up a VPN, by using products like IBM Secure Gateway to create a client/server connection between the IBM Cloud and your Db2 host, or even by setting up a direct link between your company network and the (IBM) cloud.
How do I express the scheme (https) in the model.json file? This is for a connection to Elasticsearch.
The following is the model.json file:
{
"version": "1.0",
"defaultSchema": "elasticsearch",
"schemas": [
{
"type": "custom",
"name": "elasticsearch",
"factory": "org.apache.calcite.adapter.elasticsearch.ElasticsearchSchemaFactory",
"operand": {
"coordinates": "{'127.0.0.1': 9200}",
"index": "myIndex",
"useConig": "{}"
}
}
]
}
In the following Java code I am trying to connect to Elasticsearch:
// Open a Calcite connection and register the Elasticsearch model inline
Connection conn = DriverManager.getConnection("jdbc:calcite:", properties);
CalciteConnection calciteConnection = conn.unwrap(CalciteConnection.class);
String elasticSchema = Resources.toString(somefile.class.getResource("/model.json"), Charset.defaultCharset());
new ModelHandler(calciteConnection, "inline:" + elasticSchema);
// Run a query against the Elasticsearch-backed schema
String sql = "select field1 from table1";
PreparedStatement statement2 = calciteConnection.prepareStatement(sql);
ResultSet set = statement2.executeQuery();
I get a connection-closed exception, and I can see in the log that it was trying to connect over http, not https (desired). Where do I specify https in the model file?
Calcite's Elasticsearch adapter doesn't currently support connections via HTTPS. I'd suggest you open up an issue on Calcite's JIRA. Or better yet, contribute the necessary code changes yourself :)
I followed the steps documented here to convert my existing ARM template to use the commonname setting instead of thumbprint. The deployment was successful and I was able to connect to Service Fabric Explorer in my browser after the typical certificate-selection popup. Next, I tried to deploy an application to the cluster just as I had previously. Even though I can see the cluster connection endpoint URI in the Visual Studio Publish Service Fabric Application dialog, VS fails to connect to the cluster. Before, I would get a prompt to permit VS to access the local certificate. Does anyone know how to get VS to deploy an application to a Service Fabric cluster set up using the certificate common name?
Extracts from the MS link above:
"virtualMachineProfile": {
"extensionProfile": {
"extensions": [`enter code here`
{
"name": "[concat('ServiceFabricNodeVmExt','_vmNodeType0Name')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": true,
"protectedSettings": {
"StorageAccountKey1": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[variables('vmNodeType0Name')]",
"dataPath": "D:\\SvcFab",
"durabilityLevel": "Bronze",
"enableParallelJobs": true,
"nicPrefixOverride": "[variables('subnet0Prefix')]",
"certificate": {
"commonNames": [
"[parameters('certificateCommonName')]"
],
"x509StoreName": "[parameters('certificateStoreValue')]"
}
},
"typeHandlerVersion": "1.0"
}
},
and
{
"apiVersion": "2018-02-01",
"type": "Microsoft.ServiceFabric/clusters",
"name": "[parameters('clusterName')]",
"location": "[parameters('clusterLocation')]",
"dependsOn": [
"[concat('Microsoft.Storage/storageAccounts/', variables('supportLogStorageAccountName'))]"
],
"properties": {
"addonFeatures": [
"DnsService",
"RepairManager"
],
"certificateCommonNames": {
"commonNames": [
{
"certificateCommonName": "[parameters('certificateCommonName')]",
"certificateIssuerThumbprint": ""
}
],
"x509StoreName": "[parameters('certificateStoreValue')]"
},
...
I found the solution for Visual Studio. I needed to update the PublishProfiles/Cloud.xml file. I replaced ServerCertThumbprint with ServerCommonName, and then used the certificate CN for the new property and the existing FindValue property. Additionally, I changed FindType to FindBySubjectName. I am now able to successfully connect and publish my application to the cluster.
<ClusterConnectionParameters
ConnectionEndpoint="sf-commonnametest-scus.southcentralus.cloudapp.azure.com:19000"
X509Credential="true"
ServerCommonName="sfrpe2eetest.southcentralus.cloudapp.azure.com"
FindType="FindBySubjectName"
FindValue="sfrpe2eetest.southcentralus.cloudapp.azure.com"
StoreLocation="CurrentUser"
StoreName="My" />
I have an ADF pipeline that copies 34 tables from an on-premises Oracle database to an Azure Data Lake Store; 32 of these copy just fine on a daily basis, but the other 2 consistently fail with...
Copy activity met an internal service error.
For more information, provide this message to customer support. ErrorCode: 8601 GatewayNodeName=XXXXXXXX,
ErrorCode=SystemErrorOdbcWrapperError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
Message=Unknown error from wrapper.,
Source=Microsoft.DataTransfer.ClientLibrary.Odbc.OdbcConnector,
''Type=Microsoft.DataTransfer.ClientLibrary.Odbc.Runtime.ValueException,Message=[DataSource.Error] The ODBC driver returned an invalid value.,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.Wrapper,'.
The activity JSON is templated, so it is identical for all 34 activities. I can run the oracleReaderQuery in Oracle SQL Developer using the same connection details and credentials and get results.
Searches for this have turned up one unanswered question here on Stack Overflow, and another on a Microsoft forum with a response that says "We will get back to you ASAP when we have new updates"... but there are no updates.
It seems I am not the only one having this issue; has anyone found a solution?
I have tried to do a one-off copy in ADF and get the same result; I have tried copying the table to Blob storage and get the same result.
Can anyone help me fathom what is wrong with this, please?
The activity JSON is as follows...
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "OracleSource",
"oracleReaderQuery": "SELECT stuff FROM <source table>"
},
"sink": {
"type": "AzureDataLakeStoreSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
}
},
"inputs": [
{
"name": "<source table dataset>"
},
{
"name": "<scheduling dependency dataset>"
}
],
"outputs": [
{
"name": "<destination dataset>"
}
],
"policy": {
"timeout": "02:00:00",
"concurrency": 1,
"retry": 3,
"longRetry": 2,
"longRetryInterval": "03:00:00",
"executionPriorityOrder": "OldestFirst"
},
"scheduler": {
"frequency": "Day",
"interval": 1
},
"name": "Copy Activity 34",
"description": "copy activity"
}
As I said though, this is identical, apart from the table it is accessing, to the 32 activities that work perfectly fine.
What's the data type of stuff in your table?
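If the answer turns out to be a type the ODBC wrapper struggles with (for example a NUMBER with no declared precision or a TIMESTAMP WITH TIME ZONE; this is only a guess, nothing in the post confirms the schema), one thing that might be worth trying is casting that column explicitly in the reader query, roughly like this:
"source": {
    "type": "OracleSource",
    "oracleReaderQuery": "SELECT CAST(stuff AS VARCHAR2(4000)) AS stuff FROM <source table>"
},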